Bug 289331 - RFE: switching from cluster domain to local domain needs to deactivate volume somehow
Status: CLOSED WONTFIX
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: lvm2
Version: 4.5
Hardware: All
OS: Linux
Priority: medium
Severity: low
Assigned To: Jonathan Earl Brassow
QA Contact: Cluster QE
Keywords: FutureFeature
Duplicates: 235123
Reported: 2007-09-13 10:12 EDT by Corey Marthaler
Modified: 2011-01-21 06:52 EST

Doc Type: Enhancement
Last Closed: 2011-01-21 06:52:59 EST

Description Corey Marthaler 2007-09-13 10:12:29 EDT
Description of problem:
This once again ties into bz 170705 (lvm volume ownership).
If you have a clustered LV and you wish to place it in the local node domain
(vgchange -cn), then either clvmd/lvm needs to enforce that the user
deactivates the volume beforehand, or (and I would argue that this is the
better solution) it needs to deactivate the volume on all nodes except the
node on which the change-to-local-domain command was executed. This is
especially true with cmirrors, as you'll run into problems if a deactivation
isn't done. In the non-mirror case, you end up with a local LVM volume that is
still active on all nodes in the cluster, and that's a corruption issue
waiting to happen.

Version-Release number of selected component (if applicable):
lvm2-cluster-2.02.27-2.el4
lvm2-2.02.27-2.el4

How reproducible:
every time
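
A minimal reproduction sketch, assuming a two-node cluster with clvmd running
and an existing clustered VG (the host names, VG name "vg0", and LV name "lv0"
are illustrative, not taken from this report):

# On node 1: create an LV in the clustered VG and activate it cluster-wide
[root@node-01 ~]# lvcreate -L 100M -n lv0 vg0
[root@node-01 ~]# vgchange -ay vg0

# On node 1: switch the VG from the cluster domain to the local domain;
# nothing deactivates the LV on the other nodes
[root@node-01 ~]# vgchange -cn vg0

# On node 2: the now-local LV is still active (corruption risk)
[root@node-02 ~]# lvs vg0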
Comment 1 Corey Marthaler 2007-09-13 10:21:09 EDT
Another problem that happens as a result of switching domains: if you switch
to the local domain and then deactivate that volume by hand on the other
nodes, you will be unable to activate it in the single-node domain.

[root@link-08 tmp]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   1   2   0 wz--n- 74.41G 96.00M
  corey        8   1   0 wz--n-  2.11T  2.11T
[root@link-08 tmp]# lvs corey/lv
  LV   VG    Attr   LSize Origin Snap%  Move Log Copy%
  lv   corey -wi--- 1.00G
[root@link-08 tmp]# vgchange -ay corey
  0 logical volume(s) in volume group "corey" now active

Once you're in this situation, there's no way that I know of to get that volume
activated unless you recreate it.
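
For reference, a sketch of the recreate workaround mentioned above; this is
destructive and assumes the data on the volume is expendable (names and size
follow the transcript above):

# Remove the stuck LV and recreate it in the now-local VG
[root@link-08 tmp]# lvremove corey/lv
[root@link-08 tmp]# lvcreate -L 1G -n lv corey
[root@link-08 tmp]# vgchange -ay corey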
Comment 4 Milan Broz 2010-10-21 12:05:08 EDT
*** Bug 235123 has been marked as a duplicate of this bug. ***
Comment 6 Milan Broz 2010-10-21 13:16:06 EDT
Fixed in lvm2-2.02.42-8.el4.
Comment 8 Corey Marthaler 2011-01-19 15:31:58 EST
Nothing in this bug has been fixed. Putting back into ASSIGNED.

# Change from cluster to local
[root@grant-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  grant        6   1   0 wz--nc 238.39G 238.30G
[root@grant-01 ~]# vgchange -cn grant
  Volume group "grant" successfully changed
[root@grant-01 ~]# vgchange -an grant
  0 logical volume(s) in volume group "grant" now active
[root@grant-01 ~]# lvs -a -o +devices
  LV       VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  lv       grant      -wi--- 100.00M                                       /dev/sdc1(0)

# Remains active on other nodes
[root@grant-02 ~]# lvs -a -o +devices
  LV       VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  lv       grant      -wi-a- 100.00M                                       /dev/sdc1(0)

# Deactivate by hand on other nodes
[root@grant-02 ~]# vgchange -an grant
  0 logical volume(s) in volume group "grant" now active

# Still unable to activate on the original local node
[root@grant-01 ~]# vgchange -ay grant
  0 logical volume(s) in volume group "grant" now active
Comment 9 Milan Broz 2011-01-21 06:52:59 EST
The lvm2 packages in RHEL 4.9 contain a check that prevents switching the
clustered flag on mirrors and snapshots while they are active, but not on
linear/striped volumes, and automatic deactivation is not implemented
upstream.

After discussing this with Corey, we decided it is better to close this as
WONTFIX for RHEL4 and track the problem for newer releases (part of the
problem is fixed in the errata, but the fix is simply not complete yet).
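
Until automatic deactivation is implemented, a sketch of the safe ordering
implied by the original description (deactivate everywhere before switching
domains; node and VG names are illustrative):

# On every node in the cluster: deactivate the VG first
[root@node-0N ~]# vgchange -an vg0

# Then, on one node only: clear the clustered flag and reactivate locally
[root@node-01 ~]# vgchange -cn vg0
[root@node-01 ~]# vgchange -ay vg0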
