Red Hat Bugzilla – Bug 209320
attempt to lock already locked VG hangs with gulm
Last modified: 2007-11-16 20:14:54 EST
Description of problem:
In a cluster using GuLM, when a second node attempts to exclusively activate a
volume group that is already active elsewhere, the command hangs instead of failing.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. configure GuLM cluster
2. start clvmd
3. on node A, vgchange -aey <vgname>
4. on node B, vgchange -aey <vgname>
Actual results:
The vgchange command on the second node hangs and cannot be interrupted.
Expected results:
The vgchange command on the second node should return an error.
clvmd was checking for LCK_NONBLOCK as the flag for a non-blocking lock, but by
that point the flag had already been translated into the (DLM-specific)
LKF_NOQUEUE flag, so the check never matched and the request blocked.
Checking in daemons/clvmd/clvmd-command.c;
/cvs/lvm2/LVM2/daemons/clvmd/clvmd-command.c,v <-- clvmd-command.c
new revision: 1.14; previous revision: 1.13
Checking in daemons/clvmd/clvmd-gulm.c;
/cvs/lvm2/LVM2/daemons/clvmd/clvmd-gulm.c,v <-- clvmd-gulm.c
new revision: 1.20; previous revision: 1.19
Checking in daemons/clvmd/clvmd-gulm.h;
/cvs/lvm2/LVM2/daemons/clvmd/clvmd-gulm.h,v <-- clvmd-gulm.h
new revision: 1.3; previous revision: 1.2
Checking in lib/locking/locking.h;
/cvs/lvm2/LVM2/lib/locking/locking.h,v <-- locking.h
new revision: 1.29; previous revision: 1.28
Set to POST rather than MODIFIED...
Fix verified in lvm2-cluster-2.02.17-1.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.