Red Hat Bugzilla – Bug 995193
[RHEL 6.4] vgchange -cn vgname --config 'global {locking_type = 0}' does not work when using mirrored LV device.
Last modified: 2013-11-21 18:26:39 EST
***Update to case description***

Going back through the case, I opened this with the wrong problem description. It was *NOT* using software RAID (mdadm) devices. The command is failing when using a mirrored LV device. The description should be:

"Steps to Reproduce:
1. Create a mirrored LV device, e.g.:
   # lvcreate -L 200m -m1 -n mirrorlv testvg /dev/vdb1 /dev/vdc1
2. Deactivate the VG:
   # vgchange -an testvg
3. Set the clustered attribute on the VG:
   # vgchange -cy testvg
4. Try to remove the clustered attribute from the VG:
   # vgchange -cn testvg --config 'global {locking_type = 0}'

Actual Results:
  WARNING: Locking disabled. Be careful! This could corrupt your metadata.
  Unable to determine exclusivity of mirrorlv
  Mirror logical volumes must be inactive when changing the cluster attribute.

The mirrored LV is inactive, so this command should succeed.

Expected Results:
The clustered attribute flag is successfully removed."
Created attachment 784936 [details]
Output from vgchange -cn testvg --config 'global {locking_type = 0}' failing with a mirrored LV

The requested vgchange -vvvv output is attached.
Fix committed upstream:

commit abc89422af75fa9e20d24285d1366e4631cb8748
Author: Jonathan Brassow <jbrassow@redhat.com>
Date:   Mon Aug 12 13:56:47 2013 -0500

    Mirror: Fix inability to remove VG's cluster flag if it contains a mirror

    According to bug 995193, if a volume group
      1) contains a mirror
      2) is clustered
      3) 'locking_type' = 0 is used
    then it is not possible to remove the 'c'luster flag from the VG. This is
    due to the way _lv_is_active behaves.

    We shouldn't allow the cluster flag to be flipped unless the mirrors in
    the cluster are not active. This is because different kernel modules are
    used depending on whether a mirror is clustered or not. When we attempt
    to see if the mirror is active, we first check locally. If it is not,
    then we attempt to check for remotely active instances if the VG is
    clustered. Since the no_lock locking type is LCK_CLUSTERED, but does not
    implement 'query_resource', remote_lock_held will always return an error
    in this case. An error from remote_lock_held is treated as though the
    lock _is_ held (i.e. the LV is active remotely). This blocks the cluster
    flag from changing.

    The solution is to implement 'query_resource' for the no_lock type. It
    will report a message and return 1. This will allow _lv_is_active to
    function properly. The LV would be considered not active remotely and
    the VG can change its flag.
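To make the mechanism concrete, here is a small standalone C model of the behaviour the commit message describes. It is an illustration only, not lvm2 source; the struct, function, and variable names below are invented for the example, and only the overall logic (a missing 'query_resource' handler makes an inactive mirror look remotely active, while a handler that returns success does not) follows the commit message.

/* Standalone model of the behaviour described above -- NOT lvm2 source. */
#include <stdio.h>

struct locking_type {
        /* Returns 1 on success and fills *held; returns 0 on error. */
        int (*query_resource)(const char *resource, int *held);
};

/* no_lock before the fix: no query handler at all. */
static struct locking_type no_lock_old = { .query_resource = NULL };

/* no_lock after the fix: report a message and claim success. */
static int _no_query_resource(const char *resource, int *held)
{
        printf("Ignoring request to query lock held for %s while locking is disabled.\n",
               resource);
        *held = 0;              /* nothing can be held remotely */
        return 1;
}
static struct locking_type no_lock_new = { .query_resource = _no_query_resource };

/* Simplified stand-in for remote_lock_held(): an unanswerable query is an error. */
static int remote_lock_held(const struct locking_type *lt, const char *resource)
{
        int held;

        if (!lt->query_resource || !lt->query_resource(resource, &held))
                return -1;      /* error: caller must assume the lock is held */
        return held;
}

/* Simplified stand-in for the remote half of _lv_is_active() on an LV that is
 * already known to be locally inactive in a clustered VG. */
static int lv_active_remotely(const struct locking_type *lt, const char *lv)
{
        int r = remote_lock_held(lt, lv);

        return (r < 0) ? 1 : r; /* an error is treated as "active remotely" */
}

int main(void)
{
        printf("before fix: mirrorlv remotely active? %d -> cluster flag change blocked\n",
               lv_active_remotely(&no_lock_old, "mirrorlv"));
        printf("after fix:  mirrorlv remotely active? %d -> cluster flag change allowed\n",
               lv_active_remotely(&no_lock_new, "mirrorlv"));
        return 0;
}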
[root@virt-012 ~]# lvcreate -m 1 -n mirror -L1G clustered
  Logical volume "mirror" created

waited for sync

[root@virt-012 ~]# vgchange -an clustered
  0 logical volume(s) in volume group "clustered" now active

[root@virt-012 ~]# vgchange -cn clustered --config 'global {locking_type = 0}'
  WARNING: Locking disabled. Be careful! This could corrupt your metadata.
  Volume group "clustered" successfully changed

Marking verified with:
lvm2-2.02.100-4.el6.x86_64
lvm2-cluster-2.02.100-4.el6.x86_64
kernel-2.6.32-420.el6.x86_64
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html