RHEL Engineering is moving the tracking of its product development work on RHEL 6 through RHEL 9 to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "RHEL project" in Red Hat Jira and file new tickets there.

Individual Bugzilla bugs in the statuses "NEW", "ASSIGNED", and "POST" are being migrated throughout September 2023. Bugs of Red Hat partners with an assigned Engineering Partner Manager (EPM) are migrated in late September, per pre-agreed dates. Bugs against the components "kernel", "kernel-rt", and "kpatch" are migrated only if still in "NEW" or "ASSIGNED".

If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The e-mail creates a ServiceNow ticket with Red Hat.

Migrated Bugzilla bugs are moved to status "CLOSED", resolution "MIGRATED", with "MigratedToJIRA" set in "Keywords". The link to the successor Jira issue appears under "Links", has a little "two-footprint" icon next to it, and directs you to the "RHEL project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/RHEL-XXXX", where "X" is a digit). The same link also appears in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 822213 - Disallow vgchange --clustered n if any LVs in the VG are active on remote nodes
Summary: Disallow vgchange --clustered n if any LVs in the VG are active on remote nodes
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: pre-dev-freeze
Target Release: ---
Assignee: Joe Thornber
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 672314
Blocks: 697866 756082
 
Reported: 2012-05-16 16:12 UTC by Corey Marthaler
Modified: 2023-03-08 07:25 UTC
CC List: 12 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 672314
Environment:
Last Closed: 2019-06-27 21:49:51 UTC
Target Upstream Version:
Embargoed:



Comment 1 RHEL Program Management 2012-07-10 06:38:20 UTC
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.

Comment 2 RHEL Program Management 2012-07-10 23:58:17 UTC
This request was erroneously removed from consideration in Red Hat Enterprise Linux 6.4, which is currently under development.  This request will be evaluated for inclusion in Red Hat Enterprise Linux 6.4.

Comment 3 RHEL Program Management 2012-12-14 07:01:30 UTC
This request was not resolved in time for the current release. Red Hat invites you to ask your support representative to propose this request, if still desired, for consideration in the next release of Red Hat Enterprise Linux.

Comment 10 Steven J. Levine 2013-07-19 21:00:14 UTC
Marking docs_scoped- since it looks as if this change does not require an update to the LVM manual. If that is not correct and this should be added to the user manual, please let me know.

Comment 11 Alasdair Kergon 2013-08-13 17:37:01 UTC
Summary: If using -cn to say that a VG is no longer clustered, it makes no sense for us to allow any of the LVs to remain active on any other node.  Otherwise it would still actually be clustered but we would be encouraging people to pretend it was not!
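
To make the rule above concrete, here is a minimal, self-contained C sketch of the guard Comment 11 calls for. It is a toy model, not lvm2's actual code: the struct layouts and the vg_set_clustered() name are assumptions made for illustration, and only the rule itself and the error message wording are taken from this bug.

#include <stdio.h>

/* Toy model of a VG with per-node activation state; not lvm2's real
 * data structures. */
struct lv {
	const char *name;
	int active_nodes;   /* how many cluster nodes have this LV active */
	int active_locally; /* 1 if active on the node running the command */
};

struct vg {
	const char *name;
	int clustered;
	struct lv *lvs;
	int lv_count;
};

/* The rule from Comment 11: clearing the clustered flag must fail if
 * any LV is active anywhere other than the local node. */
static int vg_set_clustered(struct vg *vg, int clustered)
{
	if (!clustered) {
		for (int i = 0; i < vg->lv_count; i++) {
			struct lv *lv = &vg->lvs[i];
			int remote = lv->active_nodes - (lv->active_locally ? 1 : 0);
			if (remote > 0) {
				fprintf(stderr,
					"Can't change cluster attribute with active "
					"logical volume %s/%s.\n", vg->name, lv->name);
				return 0;
			}
		}
	}
	vg->clustered = clustered;
	return 1;
}

int main(void)
{
	struct lv lvs[] = { { "lv", 1, 0 } }; /* active on one remote node only */
	struct vg vg = { "vg", 1, lvs, 1 };

	return vg_set_clustered(&vg, 0) ? 0 : 1; /* refuses, as Comment 11 wants */
}
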

Comment 12 Alasdair Kergon 2013-09-25 21:58:19 UTC
So we don't currently have a function to tell us definitively whether or not an LV is active on a remote node.  I have a prototype patch but further work is needed.

Comment 16 Alasdair Kergon 2016-01-20 01:39:20 UTC
I think the one outstanding change is to update distribute_command() in clvmd.c so that when CLVMD_FLAG_REMOTE is set it no longer also contacts the local node.

(So for this case, remove add_to_lvmqueue, reduce the number of expected replies by one, and do something sensible if the local node is the only node.  Note that this is a shared code path with requests to contact all nodes.)
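
A rough, self-contained sketch of the control flow Comment 16 describes. The clvmd internals are only modeled here: cluster_node_count, send_to_remote_nodes(), and the function signatures are stand-ins; only the CLVMD_FLAG_REMOTE flag name, the dropped add_to_lvmqueue() call, and the reply-count bookkeeping come from the comment itself.

#include <stdio.h>

#define CLVMD_FLAG_REMOTE 0x02 /* flag name borrowed from the comment */

/* Toy stand-ins for clvmd's real machinery. */
static int cluster_node_count = 3; /* includes the local node */

static void send_to_remote_nodes(const char *cmd)
{
	printf("sent '%s' to %d remote node(s)\n", cmd, cluster_node_count - 1);
}

static void add_to_lvmqueue(const char *cmd)
{
	printf("queued '%s' for the local node\n", cmd);
}

/* Sketch of the proposed change: with CLVMD_FLAG_REMOTE set, contact
 * only the other nodes, expect one fewer reply, and do something
 * sensible (here: succeed trivially) when the local node is the only
 * node in the cluster. */
static int distribute_command(const char *cmd, unsigned flags,
			      int *expected_replies)
{
	if (flags & CLVMD_FLAG_REMOTE) {
		if (cluster_node_count <= 1) {
			*expected_replies = 0; /* no remote nodes to ask */
			return 0;
		}
		send_to_remote_nodes(cmd);
		/* No add_to_lvmqueue() call: the local node is skipped,
		 * so one fewer reply is expected than for "all nodes". */
		*expected_replies = cluster_node_count - 1;
		return 0;
	}

	/* The "all nodes" request shares this code path: remote nodes
	 * plus the local queue. */
	send_to_remote_nodes(cmd);
	add_to_lvmqueue(cmd);
	*expected_replies = cluster_node_count;
	return 0;
}

int main(void)
{
	int replies;
	distribute_command("LOCK_QUERY", CLVMD_FLAG_REMOTE, &replies);
	printf("expecting %d replies\n", replies);
	return 0;
}

Keeping the remote-only and all-nodes cases in one function mirrors the shared code path the comment warns about: the remote-only branch must diverge without breaking the all-nodes behavior.
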

Comment 17 Jonathan Earl Brassow 2016-01-21 21:59:58 UTC
We have many lv_is_active_*() functions.  We don't have one specifically for this check, but we could.  The capability is there in the base function (_lv_is_active()).

I'll devel_ack+ this one.
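
A minimal sketch of the wrapper Comments 12 and 17 point toward, assuming the pattern described above: a base _lv_is_active() that reports where the LV is active, with thin lv_is_active_*() wrappers on top. The lv_is_active_remotely() name and the toy state struct are hypothetical, not necessarily what lvm2 ended up shipping.

#include <stdbool.h>
#include <stdio.h>

/* Toy activation state; the real _lv_is_active() would query the
 * cluster locking layer instead of a struct field. */
struct lv_state {
	const char *name;
	bool active_locally;
	bool active_remotely;
};

/* Base function: returns whether the LV is active anywhere, and
 * optionally reports where, via out-parameters. */
static bool _lv_is_active(const struct lv_state *lv,
			  bool *locally, bool *remotely)
{
	if (locally)
		*locally = lv->active_locally;
	if (remotely)
		*remotely = lv->active_remotely;
	return lv->active_locally || lv->active_remotely;
}

/* The wrapper this bug needed: "is this LV active on a remote node?"
 * Hypothetical name, built on the base function as Comment 17 suggests. */
static bool lv_is_active_remotely(const struct lv_state *lv)
{
	bool remotely;
	_lv_is_active(lv, NULL, &remotely);
	return remotely;
}

int main(void)
{
	struct lv_state lv = { "vg/lv", false, true };
	printf("%s active remotely: %d\n", lv.name, lv_is_active_remotely(&lv));
	return 0;
}
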

Comment 21 Jonathan Earl Brassow 2019-06-27 21:49:51 UTC
appears to be fixed:

[root@bp-02 ~]# lvs vg
  LV   VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv   vg -wi-a----- 100.00m

[root@bp-01 ~]# lvs vg
  LV   VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv   vg -wi------- 100.00m
[root@bp-01 ~]# vgchange -cn vg
  Can't change cluster attribute with active logical volume vg/lv.
  Conversion is supported only for locally exclusive volumes.

