Bug 672317 - LVs are unable to be deactivated when switching from local to cluster domain
Summary: LVs are unable to be deactivated when switching from local to cluster domain
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: pre-dev-freeze
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 684868 (view as bug list)
Depends On:
Blocks: 756082
 
Reported: 2011-01-24 19:54 UTC by Corey Marthaler
Modified: 2023-03-08 07:25 UTC (History)
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-09 21:13:31 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2011-01-24 19:54:14 UTC
Description of problem:
This is the RHEL6 version of bug 289541. When a locally active LV's VG is switched to the cluster domain, clvmd should activate that LV on all cluster nodes. Also, after the domain switch, that LV cannot be deactivated.

[root@grant-01 ~]# pvscan
  PV /dev/sdb1                      lvm2 [34.06 GiB]
  PV /dev/sdb2                      lvm2 [34.06 GiB]
  PV /dev/sdb3                      lvm2 [34.06 GiB]
  PV /dev/sdc1                      lvm2 [45.41 GiB]
  PV /dev/sdc2                      lvm2 [45.41 GiB]
  PV /dev/sdc3                      lvm2 [45.41 GiB]

# Create a local volume
[root@grant-01 ~]# vgcreate -cn test /dev/sd[bc][123]
  Non-clustered volume group "test" successfully created
[root@grant-01 ~]# lvcreate -L 100M -n lv test
  Logical volume "lv" created
[root@grant-01 ~]# lvs -a -o +devices
  LV      VG         Attr   LSize   Devices
  lv      test       -wi-a- 100.00m /dev/sdb1(0)
[root@grant-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  test         6   1   0 wz--n- 238.38g 238.29g
[root@grant-01 ~]# dmsetup ls
test-lv (253, 3)

# Change it to the cluster domain
[root@grant-01 ~]# vgchange -cy test
  Volume group "test" successfully changed
[root@grant-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  test         6   1   0 wz--nc 238.38g 238.29g
[root@grant-01 ~]# lvs
  LV      VG         Attr   LSize   
  lv      test       -wi-a- 100.00m

# Now the LV cannot be deactivated
[root@grant-01 ~]# vgchange -an test
  1 logical volume(s) in volume group "test" now active
[root@grant-01 ~]# vgchange -an test
  1 logical volume(s) in volume group "test" now active
[root@grant-01 ~]# vgchange -an test
  1 logical volume(s) in volume group "test" now active

# Also, it's never activated on the other nodes in the cluster
[root@grant-02 ~]# lvs -a -o +devices
  LV      VG         Attr   LSize   Devices
  lv      test       -wi--- 100.00m /dev/sdb1(0)


Version-Release number of selected component (if applicable):
2.6.32-71.el6.x86_64

lvm2-2.02.72-8.el6_0.4    BUILT: Thu Dec  9 09:46:33 CST 2010
lvm2-libs-2.02.72-8.el6_0.4    BUILT: Thu Dec  9 09:46:33 CST 2010
lvm2-cluster-2.02.72-8.el6_0.4    BUILT: Thu Dec  9 09:46:33 CST 2010
udev-147-2.29.el6    BUILT: Tue Aug 31 16:44:10 CDT 2010
device-mapper-1.02.53-8.el6_0.4    BUILT: Thu Dec  9 09:46:33 CST 2010
device-mapper-libs-1.02.53-8.el6_0.4    BUILT: Thu Dec  9 09:46:33 CST 2010
device-mapper-event-1.02.53-8.el6_0.4    BUILT: Thu Dec  9 09:46:33 CST 2010
device-mapper-event-libs-1.02.53-8.el6_0.4    BUILT: Thu Dec  9 09:46:33 CST 2010
cmirror-2.02.72-8.el6_0.4    BUILT: Thu Dec  9 09:46:33 CST 2010

How reproducible:
Every time
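For quick re-testing, the transcript above can be condensed into one script. This is a sketch only: the /dev/sd[bc][123] device names come from the original report and will differ elsewhere, and it needs root plus a running clvmd.

```shell
#!/bin/sh
# Condensed reproduction of the transcript above. Device names are
# taken from the original report; adjust for the local system.
set -x

vgcreate -cn test /dev/sd[bc][123]   # non-clustered VG
lvcreate -L 100M -n lv test          # LV is activated locally
vgchange -cy test                    # switch VG to the cluster domain

vgchange -an test                    # expected: LV deactivates
lvs -o lv_name,lv_attr --noheadings test   # observed: still active ('a' in attr)
```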

Comment 1 Alasdair Kergon 2011-01-24 21:07:36 UTC
So what should vgchange -cy do?  Should it find the list of all locally-active LVs in that VG and activate them on other cluster nodes?  What if some of them should be exclusively-activated locally?  What about -cn?  Should it deactivate them on remote nodes?

My answers:
   vgchange -cy / -cn  should not change the activation status of any LVs.

vgchange -a should be used directly for that, so that local/exclusive changes can be dealt with explicitly.

Instead, vgchange -cy should - behind the scenes - issue a 'lvchange -aly' for each already-active local LV, so that clvmd picks up the right state.  (Some variation of --refresh might be another way.)

-cn could fail if any LV is active on other nodes but leave a locally-active LV alone. But would clvmd know to drop the lock without deactivating the LV? Could it tell from the no-longer-clustered metadata?
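The behind-the-scenes reconciliation suggested above might look roughly like the following. This is a hypothetical sketch, not anything lvm2 actually does here, and the -S/--select option only appeared in lvm2 releases later than the one in this report:

```shell
# Hypothetical sketch of the proposal above: after switching the VG to
# clustered, re-issue a local activation for each already-active LV so
# that clvmd picks up the matching lock state.
vgchange -cy test
for lv in $(lvs --noheadings -o lv_name -S 'lv_active=active' test); do
    lvchange -aly "test/$lv"
done
```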

Comment 2 Alasdair Kergon 2011-01-24 21:10:32 UTC
Alternatively, there might be places in the clvmd code where it should re-check the 'clustered' state and modify its behaviour accordingly.

Comment 3 Corey Marthaler 2011-01-27 17:33:38 UTC
This issue appears to occur only if the LV doesn't have a fixed minor number (the 'm' attr in the lvs output). If that attr does exist, as with volume stripe_4_4172/stripe_4_41720 below, this works fine.

[root@grant-01 ~]# vgchange -cy stripe_4_4172
  Volume group "stripe_4_4172" successfully changed

[root@grant-01 ~]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree
  stripe_4_4172   4   2   0 wz--nc 476.80g 182.85g

[root@grant-01 ~]# lvs
  LV             VG            Attr   LSize   
  lv             stripe_4_4172 -wi-a- 100.00m
  stripe_4_41720 stripe_4_4172 -wima- 293.86g

[root@grant-01 ~]# vgchange -an stripe_4_4172
  1 logical volume(s) in volume group "stripe_4_4172" now active
[root@grant-01 ~]# vgchange -an stripe_4_4172
  1 logical volume(s) in volume group "stripe_4_4172" now active
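For reference, a fixed minor number (the 'm' attribute noted above) is assigned with lvchange --persistent. A sketch; minor 250 is an arbitrary choice, and exact flag requirements (e.g. whether --major is also mandatory) vary across lvm2 versions:

```shell
# Assign a fixed (persistent) minor number, which gives the LV the 'm'
# attribute that deactivated correctly in the test above.
lvchange -an test/lv                         # deactivate before changing device numbers
lvchange --persistent y --minor 250 test/lv  # minor 250 is arbitrary
lvchange -ay test/lv
lvs -o lv_name,lv_attr test                  # attr should now include 'm'
```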

Comment 4 Corey Marthaler 2011-01-28 17:56:54 UTC
FWIW, the lvchange cmd makes no difference.

[root@grant-02 ~]# lvs
  LV            VG           Attr   LSize   
  l_2_c         linear_1_639 -wi-a- 100.00m

[root@grant-02 ~]# lvchange -an linear_1_639/l_2_c

[root@grant-02 ~]# lvs
  LV            VG           Attr   LSize   
  l_2_c         linear_1_639 -wi-a- 100.00m

Comment 5 Corey Marthaler 2011-03-14 18:25:19 UTC
*** Bug 684868 has been marked as a duplicate of this bug. ***

Comment 6 RHEL Program Management 2011-04-04 01:45:35 UTC
Since RHEL 6.1 External Beta has begun, and this bug remains
unresolved, it has been rejected as it is not proposed as
exception or blocker.

Red Hat invites you to ask your support representative to
propose this request, if appropriate and relevant, in the
next release of Red Hat Enterprise Linux.

Comment 7 Milan Broz 2011-05-31 09:52:04 UTC
There seem to be two problems here:

1) what should happen with activated LVs after vgchange -cn?
2) why does an LV with a fixed minor number behave differently?

Anyway, I think 1) still needs upstream discussion on how to handle it properly; until that happens, conditional NACK (pending design).

Comment 10 Alasdair Kergon 2012-05-15 01:13:36 UTC
I stand by comment #2 and think the basic requirement/design is clear (but some implementation details remain to be worked out).

vgchange -c should not activate or deactivate any LVs.
Bug 672314 deals with restricting the use of -c to cases where no LVs need to change state anywhere in the cluster.

This bug then deals with what's left, namely to ensure the clvmd lock state on the local node is updated to match reality after a -c transition.  (Fixing that will hopefully be sufficient to make the various problems reported here go away - different parts of the code use different methods to find out whether or not an LV is active, and without this fix, could get inconsistent answers.)

Comment 16 Jonathan Earl Brassow 2017-10-04 00:51:16 UTC
Let's simply disallow changing the cluster attribute unless all LVs are inactive, no?  Let's not make this harder than it has to be.

Comment 18 Jonathan Earl Brassow 2019-08-06 01:18:49 UTC
(In reply to Jonathan Earl Brassow from comment #16)
> Let's simply disallow changing the cluster attribute unless all LVs are
> inactive, no?  Let's not make this harder than it has to be.

This has already been fixed, no?

Dave, would you mind checking it?

Comment 19 David Teigland 2019-08-06 16:04:31 UTC
1. You can do vgchange -cy or -cn with locally active LVs.

2. You can't do vgchange -cn if there's a remotely active LV.

3. You can do vgchange -cy if there's a remotely active LV (only if system ID is not being used, which it should be.)


The language of this message from point 2 makes it sound like point 1 is working as intended (whether that's a good idea or not):

# vgchange -cn cc
  Can't change cluster attribute with active logical volume cc/lv1.
  Conversion is supported only for locally exclusive volumes.

I suggest closing this as working well enough.
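The conservative procedure implied by the rules above (and by comment 16) is to deactivate the VG on every node before toggling the cluster attribute. A sketch; the node list is illustrative, taken from the hostnames in this report:

```shell
# Deactivate the VG on all cluster nodes before changing the clustered
# attribute, then flip it and reactivate. Node names are illustrative.
for node in grant-01 grant-02; do
    ssh "$node" vgchange -an test
done
vgchange -cn test    # succeeds: no LV is active anywhere
vgchange -ay test    # reactivate locally as needed
```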

Comment 20 Jonathan Earl Brassow 2020-08-19 21:07:30 UTC
RHEL8 uses new locking mechanism.  If we haven't had customer issues with this so far, I'm fine WONTFIXing this bug.

Comment 21 David Teigland 2020-11-09 21:13:31 UTC
almost made it 10 years :(

