Bug 1108380
| Summary: | cache pool shouldn't be allowed to be deactivated when an active cache origin exists | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Cache Logical Volumes | QA Contact: | Cluster QE <mspqa-list> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, mcsontos, msnitzer, prajnoha, zkabelac |
| Version: | 7.0 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.111-1.el7 | Doc Type: | Bug Fix |
| Doc Text: | The initial proof-of-concept implementation of cache pools allowed an unused cache pool to be activated. Since an unused cache pool serves no purpose, this has been eliminated, and lvm2 no longer uses a fake cache-pool error target device in the dm table. | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-03-05 13:08:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Corey Marthaler
2014-06-11 21:54:32 UTC
The next update of the upstream lvm package will solve this problem by not even allowing activation of the 'non-existing' cache-pool device (since, unlike with a thin-pool, there is no such target in the dm table). Upstream commits start here: http://www.redhat.com/archives/lvm-devel/2014-April/msg00006.html

An observation: I can run `lvchange -ay vg/cpool` and it does nothing, but still returns 0 without a warning. (Same for `-an`, but that is expected - it is not active.) I wonder whether a disabled operation like this activation/deactivation should return 0? Should it display a warning message? Or is silently doing nothing just fine?

Unlike a thin-pool, a cache-pool cannot be activated without an origin. There is no 'cache-pool' dm target - it simply doesn't exist - and its visibility is a pure user abstraction. So I see it as a filter: just like volume_list filters out volumes you cannot activate, it also returns 0 as success for them. All you can do with a cache-pool is change some of its parameters. So far I believe a 0 return code is better than an error message for this case. Maybe an extra message could be added, like: "Ignoring activation of cache-pool, it can only be used with a cache origin."

That is, `lvchange -ay vg` with a cache-pool would need to return an error, as a cache-pool is a visible LV and every visible LV is changed.

After some longer thinking, I've realized there is a nice use for cache-pool activation when the cache-pool is unused: it resolves the problem of how to clean up metadata before reusing a cache-pool. An 'active' cache-pool will present the cache-pool metadata, which I'll use for clearing the cache-pool metadata - this needs some mechanism that still lets me maintain valid metadata. It will only work for an unused cache-pool. A cache-pool in use (connected to a cache origin) will keep printing an 'Ignoring' message on activation requests.

The problem with the idea in comment 7 is that such an LV would then show mismatching info about the active volume size.
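For context, a minimal sketch of the scenario under discussion. The volume group name and sizes here are hypothetical, and the commands require root and free physical volumes, so this is an illustration rather than a verified transcript:

```shell
# Create a cache pool and an origin LV (hypothetical VG "vg").
lvcreate --type cache-pool -L 2G -n pool vg
lvcreate -L 4G -n corigin vg

# Attach the pool to the origin; "pool" becomes a hidden sub-LV
# of the resulting cache LV.
lvconvert --type cache --cachepool vg/pool vg/corigin

# With the fix, deactivating the in-use pool (or any of its sub-LVs)
# is refused instead of being silently accepted.
lvchange -an vg/pool
```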
So I guess the best approach is to simply not allow the user to activate a cache-pool. However, the tool will internally use this 'hidden' state for metadata volume clearing - such a volume would be active only temporarily and would be left active only in some serious error case - but the tool will still handle this state normally with any other command.

Fix verified in the latest rpms. No sub volumes that are part of an active cache volume are allowed to be deactivated.

```
lvm2-2.02.112-1.el7                        BUILT: Tue Nov 11 09:39:35 CST 2014
lvm2-libs-2.02.112-1.el7                   BUILT: Tue Nov 11 09:39:35 CST 2014
lvm2-cluster-2.02.112-1.el7                BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-1.02.91-1.el7                BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-libs-1.02.91-1.el7           BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-event-1.02.91-1.el7          BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-event-libs-1.02.91-1.el7     BUILT: Tue Nov 11 09:39:35 CST 2014
device-mapper-persistent-data-0.3.2-1.el7  BUILT: Thu Apr  3 09:58:51 CDT 2014
cmirror-2.02.112-1.el7                     BUILT: Tue Nov 11 09:39:35 CST 2014
```

```
[root@host-118 ~]# lvs -a -o +devices
  LV                       Attr       LSize Pool   Origin          Data%  Meta%  Cpy%Sync Devices
  corigin                  Cwi-a-C--- 4.00g [pool] [corigin_corig] 0.02   3.47   0.00     corigin_corig(0)
  [corigin_corig]          rwi-aoC--- 4.00g                                      100.00   corigin_corig_rimage_0(0),corigin_corig_rimage_1(0),corigin_corig_rimage_2(0),corigin_corig_rimage_3(0)
  [corigin_corig_rimage_0] iwi-aor--- 2.00g                                               /dev/sdb1(1)
  [corigin_corig_rimage_1] iwi-aor--- 2.00g                                               /dev/sde2(1)
  [corigin_corig_rimage_2] iwi-aor--- 2.00g                                               /dev/sdc1(1)
  [corigin_corig_rimage_3] iwi-aor--- 2.00g                                               /dev/sdd1(1)
  [corigin_corig_rmeta_0]  ewi-aor--- 4.00m                                               /dev/sdb1(0)
  [corigin_corig_rmeta_1]  ewi-aor--- 4.00m                                               /dev/sde2(0)
  [corigin_corig_rmeta_2]  ewi-aor--- 4.00m                                               /dev/sdc1(0)
  [corigin_corig_rmeta_3]  ewi-aor--- 4.00m                                               /dev/sdd1(0)
  [lvol0_pmspare]          ewi------- 8.00m                                               /dev/sda2(0)
  [pool]                   Cwi---C--- 2.00g                        0.02   3.47   0.00     pool_cdata(0)
  [pool_cdata]             Cwi-aor--- 2.00g                                      100.00   pool_cdata_rimage_0(0),pool_cdata_rimage_1(0),pool_cdata_rimage_2(0),pool_cdata_rimage_3(0)
  [pool_cdata_rimage_0]    iwi-aor--- 1.00g                                               /dev/sdd2(1)
  [pool_cdata_rimage_1]    iwi-aor--- 1.00g                                               /dev/sdb2(1)
  [pool_cdata_rimage_2]    iwi-aor--- 1.00g                                               /dev/sda1(1)
  [pool_cdata_rimage_3]    iwi-aor--- 1.00g                                               /dev/sdc2(1)
  [pool_cdata_rmeta_0]     ewi-aor--- 4.00m                                               /dev/sdd2(0)
  [pool_cdata_rmeta_1]     ewi-aor--- 4.00m                                               /dev/sdb2(0)
  [pool_cdata_rmeta_2]     ewi-aor--- 4.00m                                               /dev/sda1(0)
  [pool_cdata_rmeta_3]     ewi-aor--- 4.00m                                               /dev/sdc2(0)
  [pool_cmeta]             ewi-aor--- 8.00m                                      100.00   pool_cmeta_rimage_0(0),pool_cmeta_rimage_1(0),pool_cmeta_rimage_2(0),pool_cmeta_rimage_3(0)
  [pool_cmeta_rimage_0]    iwi-aor--- 4.00m                                               /dev/sdd2(258)
  [pool_cmeta_rimage_1]    iwi-aor--- 4.00m                                               /dev/sdb2(258)
  [pool_cmeta_rimage_2]    iwi-aor--- 4.00m                                               /dev/sda1(258)
  [pool_cmeta_rimage_3]    iwi-aor--- 4.00m                                               /dev/sdc2(258)
  [pool_cmeta_rmeta_0]     ewi-aor--- 4.00m                                               /dev/sdd2(257)
  [pool_cmeta_rmeta_1]     ewi-aor--- 4.00m                                               /dev/sdb2(257)
  [pool_cmeta_rmeta_2]     ewi-aor--- 4.00m                                               /dev/sda1(257)
  [pool_cmeta_rmeta_3]     ewi-aor--- 4.00m                                               /dev/sdc2(257)

[root@host-118 ~]# lvchange -an cache_sanity/pool
  Unable to change internal LV pool directly
[root@host-118 ~]# lvchange -an cache_sanity/pool_cdata
  Unable to change internal LV pool_cdata directly
[root@host-118 ~]# lvchange -an cache_sanity/pool_cmeta
  Unable to change internal LV pool_cmeta directly
[root@host-118 ~]# lvchange -an cache_sanity/corigin_corig
  Unable to change internal LV corigin_corig directly
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0513.html