Bug 289331 - RFE: switching from cluster domain to local domain needs to deactivate volume somehow
Summary: RFE: switching from cluster domain to local domain needs to deactivate volume somehow
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 4
Classification: Red Hat
Component: lvm2
Version: 4.5
Hardware: All
OS: Linux
Priority: medium
Severity: low
Target Milestone: ---
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Duplicates: 235123
Depends On:
Blocks:
 
Reported: 2007-09-13 14:12 UTC by Corey Marthaler
Modified: 2011-01-21 11:52 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-01-21 11:52:59 UTC
Target Upstream Version:
Embargoed:


Description Corey Marthaler 2007-09-13 14:12:29 UTC
Description of problem:
This once again ties into bz 170705 (lvm volume ownership).
If you have a clustered LV and you wish to place it in the local node domain
(vgchange -cn), then either clvmd/lvm needs to enforce that the user
deactivates the volume beforehand, or (and I would argue that this is the
better solution) it needs to deactivate the volume on all nodes except the
node on which the change-to-local-domain command was executed. This is
especially true with cmirrors, as you'll run into problems if a deactivation
isn't done. In the non-mirror case, you end up with a local LVM volume that
is still active on all nodes in the cluster, and that's a corruption issue
waiting to happen.
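
In other words, the ordering that avoids the stale activations would be
something like the following (a sketch only; <vg> and the hostname are
placeholders, and it assumes clvmd is still running on all nodes while the
VG is clustered):

# Deactivate cluster-wide first, while the VG is still in the cluster domain
[root@node-01 ~]# vgchange -an <vg>
# Then drop the clustered flag to move the VG into the local domain
[root@node-01 ~]# vgchange -cn <vg>
# Finally activate it on the one node that should own it
[root@node-01 ~]# vgchange -ay <vg>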

Version-Release number of selected component (if applicable):
lvm2-cluster-2.02.27-2.el4
lvm2-2.02.27-2.el4

How reproducible:
every time

Comment 1 Corey Marthaler 2007-09-13 14:21:09 UTC
Another problem that results from switching domains: if you switch to the
local domain and then deactivate that volume by hand on the other nodes, you
will be unable to activate it in the single-node domain.

[root@link-08 tmp]# vgs
  VG         #PV #LV #SN Attr   VSize  VFree
  VolGroup00   1   2   0 wz--n- 74.41G 96.00M
  corey        8   1   0 wz--n-  2.11T  2.11T
[root@link-08 tmp]# lvs corey/lv
  LV   VG    Attr   LSize Origin Snap%  Move Log Copy%
  lv   corey -wi--- 1.00G
[root@link-08 tmp]# vgchange -ay corey
  0 logical volume(s) in volume group "corey" now active

Once you're in this situation, there's no way that I know of to get that volume
activated unless you recreate it.
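
One recovery path that might be worth trying (a sketch only, not verified in
this bug) is to flip the VG back to the cluster domain and redo the switch
in the safe order:

# Untested recovery sequence
[root@link-08 tmp]# vgchange -cy corey   # restore the clustered flag
[root@link-08 tmp]# vgchange -an corey   # deactivate across the cluster
[root@link-08 tmp]# vgchange -cn corey   # switch back to the local domain
[root@link-08 tmp]# vgchange -ay corey   # activate on this node only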

Comment 4 Milan Broz 2010-10-21 16:05:08 UTC
*** Bug 235123 has been marked as a duplicate of this bug. ***

Comment 6 Milan Broz 2010-10-21 17:16:06 UTC
Fixed in lvm2-2.02.42-8.el4.

Comment 8 Corey Marthaler 2011-01-19 20:31:58 UTC
Nothing in this bug has been fixed. Putting back into ASSIGNED.

# Change from cluster to local
[root@grant-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  grant        6   1   0 wz--nc 238.39G 238.30G
[root@grant-01 ~]# vgchange -cn grant
  Volume group "grant" successfully changed
[root@grant-01 ~]# vgchange -an grant
  0 logical volume(s) in volume group "grant" now active
[root@grant-01 ~]# lvs -a -o +devices
  LV       VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  lv       grant      -wi--- 100.00M                                       /dev/sdc1(0)

# Remains active on other nodes
[root@grant-02 ~]# lvs -a -o +devices
  LV       VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert Devices
  lv       grant      -wi-a- 100.00M                                       /dev/sdc1(0)

# Deactivate by hand on other nodes
[root@grant-02 ~]# vgchange -an grant
  0 logical volume(s) in volume group "grant" now active

# Still unable to activate on the original local node
[root@grant-01 ~]# vgchange -ay grant
  0 logical volume(s) in volume group "grant" now active

Comment 9 Milan Broz 2011-01-21 11:52:59 UTC
The lvm2 packages in RHEL 4.9 contain a check that refuses to switch the clustered flag for mirrors and snapshots while they are active, but not for linear/striped volumes, and automatic deactivation is not implemented upstream.

After the discussion with Corey we decided that it is better to close this WONTFIX for RHEL 4 and track the problem for new releases (even though part of the problem is fixed in the errata, the fix is simply not complete yet).
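
To illustrate what that check does and does not cover (a sketch; the VG name
is a placeholder and the exact error text is not captured in this bug):

# With an active mirror or snapshot LV in the VG, the switch is now refused:
[root@node-01 ~]# vgchange -cn <vg>    # fails while the mirror/snapshot is active
# With only active linear/striped LVs, the same command still succeeds,
# leaving the volume active on the other nodes as shown in comment 8:
[root@node-01 ~]# vgchange -cn <vg>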

