Bug 289541
Summary: when changing from local to clustered, volumes cannot be deactivated
Product: Red Hat Enterprise Linux 4
Component: lvm2-cluster
Version: 4.5
Status: CLOSED WONTFIX
Severity: low
Priority: medium
Hardware: All
OS: Linux
Reporter: Corey Marthaler <cmarthal>
Assignee: Jonathan Earl Brassow <jbrassow>
QA Contact: Cluster QE <mspqa-list>
CC: agk, jbrassow, mbroz, prajnoha
Target Milestone: ---
Target Release: ---
Doc Type: Bug Fix
Doc Text:
When active logical volumes exist in a volume group that is being switched from non-clustered to clustered, the user may be unable to deactivate the logical volumes. This issue can be avoided in a number of ways:
1) deactivate the volume group's logical volumes before changing its cluster status
2) activate the volume group cluster-wide after switching its status to 'clustered' (e.g. 'vgchange -ay <VG_name>')
3) re-activate the volume group locally after switching its status to 'clustered' (e.g. 'vgchange -aly <VG_name>')
Any of the above three work-arounds is sufficient to enable the user to deactivate the logical volumes. #2 is the most obvious choice, and most users will do this as a matter of course, since the objective is to allow multiple machines in a cluster to access the logical volumes.
Clones: 684868 (view as bug list)
Last Closed: 2011-03-14 16:54:32 UTC
Bug Blocks: 684868
Description
Corey Marthaler
2007-09-13 17:40:24 UTC
clvmd doesn't know about the active logical volumes that existed as part of the local volume group, so it skips over them. There are multiple ways to fix this via code and workaround. Here are the different possible work-arounds:

1) restart clvmd after switching the VG to clustered (not so nice)
2) 'vgchange -aly <VG>' after switching the VG to clustered (ok)
3) 'vgchange -ay <VG>' after switching the VG to clustered (ok)
4) 'vgchange -an <VG>' before switching the VG to clustered (ok)

#3 seems like the obvious option, since the user probably wants to use the LVs in the VG that was just changed to clustered. Why would they want to deactivate the LVs after switching to clustered... perhaps they forgot.

There are probably ways to do this in the code, like:

1) disallow changing the cluster status of a VG if there are /any/ active LVs
2) poking clvmd somehow when the cluster status is changed so it grabs locks appropriately for already-open LVs

I prefer to release-note this one.

Technical note added. If any revisions are required, please edit the "Technical Notes" field accordingly. All revisions will be proofread by the Engineering Content Services team.

New Contents:
When active logical volumes exist in a volume group that is being switched from non-clustered to clustered, the user may be unable to deactivate the logical volumes. This issue can be avoided in a number of ways:
1) deactivate the volume group's logical volumes before changing its cluster status
2) activate the volume group cluster-wide after switching its status to 'clustered' (e.g. 'vgchange -ay <VG_name>')
3) re-activate the volume group locally after switching its status to 'clustered' (e.g. 'vgchange -aly <VG_name>')
Any of the above three work-arounds is sufficient to enable the user to deactivate the logical volumes. #2 is the most obvious choice, and most users will do this as a matter of course, since the objective is to allow multiple machines in a cluster to access the logical volumes.
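Workaround #4 above (deactivate before the switch, then activate again) can be sketched as a small dry-run shell helper. This is a hypothetical function, not part of lvm2; it only echoes the vgchange sequence so the plan can be inspected before anything touches the volume group:

```shell
#!/bin/sh
# Hypothetical helper (illustration only): print the vgchange sequence
# that avoids leaving clvmd unaware of already-active LVs when a VG is
# switched from local to clustered. Dry run: commands are echoed, not run.
plan_cluster_switch() {
    vg=$1
    echo "vgchange -an $vg"    # deactivate local LVs before the switch
    echo "vgchange -cy $vg"    # mark the VG as clustered
    echo "vgchange -ay $vg"    # re-activate cluster-wide under clvmd locking
}

plan_cluster_switch feist
```

Piping the output to `sh` (on a real cluster node, as root) would execute the sequence; printed first, it doubles as a checklist.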
Is this still present upstream/rhel6? If so, please open a new bug to discuss a solution that preserves consistency, i.e. vgchange -cy should notify clvmd of the change using an existing mechanism.

This bug doesn't appear to actually have been fixed. Is the "fix" just for the tech notes (listed in comment #4) in rhel4.9?

```
[root@taft-01 ~]# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  feist   7   1   0 wz--n- 949.65G 949.55G
[root@taft-01 ~]# vgchange -ay feist
  1 logical volume(s) in volume group "feist" now active
[root@taft-01 ~]# dmsetup ls
feist-lv        (253, 2)
[root@taft-01 ~]# vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  feist   7   1   0 wz--n- 949.65G 949.55G
[root@taft-01 ~]# vgchange -cy feist
  Volume group "feist" successfully changed
[root@taft-01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize   VFree
  VolGroup00   1   2   0 wz--n-  68.12G      0
  feist        7   1   0 wz--nc 949.65G 949.55G
[root@taft-01 ~]# lvs
  LV VG    Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  lv feist -wi-a- 100.00M
[root@taft-01 ~]# vgchange -an feist
  1 logical volume(s) in volume group "feist" now active
[root@taft-01 ~]# vgchange -an feist
  1 logical volume(s) in volume group "feist" now active
```

Putting this back into ASSIGNED...

The lvm2 4.9 packages contain a check that the clustered flag cannot be switched for mirrors and snapshots if they are active, but not for linear/striped volumes, and automatic deactivation is not implemented upstream. After the discussion with Corey we decided that it is better to close this WONTFIX for RHEL4 and track the problem for new releases (despite the fact that part of the problem is fixed in the errata - the fix is simply not complete yet). We should add comment #4 to the release/technical notes instead (bug removed from the lvm2 errata).

Closing this bug as decided in comment #9 and opening one for rhel6.1.
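In the vgs transcript above, the clustered state is visible as the sixth character of the Attr column: "wz--n-" before the switch versus "wz--nc" after. A minimal sketch of checking that bit in shell (the helper name is made up for illustration; it parses an attribute string, it does not query LVM itself):

```shell
#!/bin/sh
# Sketch (hypothetical helper): report whether a VG attribute string, as
# shown in the Attr column of 'vgs', has the clustered bit set. The sixth
# character is 'c' for a clustered VG and '-' otherwise.
is_clustered() {
    case $1 in
        ?????c*) echo yes ;;
        *)       echo no  ;;
    esac
}

is_clustered wz--nc   # the VG after 'vgchange -cy': prints "yes"
is_clustered wz--n-   # the VG before the switch:    prints "no"
```

On a live system one could feed it the real attribute, e.g. `is_clustered "$(vgs --noheadings -o vg_attr feist | tr -d ' ')"`.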