Previously, it was impossible to convert a volume group from clustered to non-clustered mode with the configuration setting 'locking_type = 0'. This was problematic when the cluster was unavailable and the volume group needed to be converted to non-clustered mode. This issue has been resolved.
***Update to case description***
Going back through the case, I opened this bug with the wrong problem description. The setup does *NOT* use software RAID (mdadm) devices; the command fails when using a mirrored LV device.
Description should be:
"Steps to Reproduce:
1. create mirrored LV device, e.g.:
# lvcreate -L 200m -m1 -n mirrorlv testvg /dev/vdb1 /dev/vdc1
2. # vgchange -an testvg
3. set the clustered attribute on the VG:
# vgchange -cy testvg
4. try to remove the clustered attribute from the VG using
# vgchange -cn testvg --config 'global {locking_type = 0}'
Actual Results:
WARNING: Locking disabled. Be careful! This could corrupt your metadata.
Unable to determine exclusivity of mirrorlv
Mirror logical volumes must be inactive when changing the cluster attribute.
The mirrored LV is inactive and this should be successful.
Expected Results:
Successfully remove the clustered attribute flag"
Created attachment 784936 [details]
Output from vgchange -cn testvg --config 'global {locking_type = 0}' failing with a mirrored LV
Output from the requested vgchange -vvvv command.
Comment 8 Jonathan Earl Brassow 2013-08-12 19:01:50 UTC
Fix committed upstream:
commit abc89422af75fa9e20d24285d1366e4631cb8748
Author: Jonathan Brassow <jbrassow>
Date: Mon Aug 12 13:56:47 2013 -0500
Mirror: Fix inability to remove VG's cluster flag if it contains a mirror
According to bug 995193, if a volume group
1) contains a mirror
2) is clustered
3) 'locking_type' = 0 is used
then it is not possible to remove the 'c'luster flag from the VG. This
is due to the way _lv_is_active behaves.
We shouldn't allow the cluster flag to be flipped unless the mirrors in
the cluster are not active. This is because different kernel modules
are used depending on whether a mirror is clustered or not. When we
attempt to see if the mirror is active, we first check locally. If it
is not, then we attempt to check for remotely active instances if the VG
is clustered. Since the no_lock locking type is LCK_CLUSTERED, but does
not implement 'query_resource', remote_lock_held will always return an
error in this case. An error from remote_lock_held is treated as though
the lock _is_ held (i.e. the LV is active remotely). This blocks the
cluster flag from changing.
The solution is to implement 'query_resource' for the no_lock type. It
will report a message and return 1. This will allow _lv_is_active to
function properly. The LV would be considered not active remotely and
the VG can change its flag.
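The sketch below (not the actual LVM commit) illustrates the control flow described in the commit message: a no-op locking backend whose query_resource handler prints a warning and returns success with "not held", so the remote-activity check no longer treats a missing handler as "lock held". All names and signatures here (no_lock_query_resource, lv_is_active_remotely, the locking_type struct) are simplified assumptions for illustration, not LVM's internal API.

/*
 * Illustrative sketch only -- not the committed LVM source. Function and
 * struct names are hypothetical; the real fix is in LVM's locking backend
 * (upstream commit abc89422).
 */
#include <stdio.h>

/* Simplified locking backend: query_resource reports whether a resource
 * (e.g. an LV lock) is held on a remote node. Returning 0 means "error". */
struct locking_type {
    int (*query_resource)(const char *resource, int *held);
};

/* Before the fix, the no-op backend left query_resource unimplemented, so
 * callers saw an error and conservatively assumed the lock was held.
 * After the fix (sketched here): warn and report success with "not held". */
static int no_lock_query_resource(const char *resource, int *held)
{
    printf("WARNING: Locking disabled - cluster lock state of %s "
           "may be inaccurate.\n", resource);
    *held = 0;      /* no remote holder known */
    return 1;       /* success, so the caller can trust *held */
}

/* Caller similar in spirit to the _lv_is_active() path described above:
 * an error from the remote query is treated as "active remotely". */
static int lv_is_active_remotely(const struct locking_type *lt, const char *lv)
{
    int held = 0;

    if (!lt->query_resource || !lt->query_resource(lv, &held))
        return 1;   /* error => assume active, which blocked vgchange -cn */

    return held;
}

int main(void)
{
    struct locking_type no_lock = { .query_resource = no_lock_query_resource };

    /* With the handler implemented, the mirror LV is reported as inactive
     * remotely, so the cluster flag can be cleared. */
    printf("active remotely: %d\n",
           lv_is_active_remotely(&no_lock, "testvg/mirrorlv"));
    return 0;
}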
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2013-1704.html