Bug 1255171
| Summary: | cache_policy no longer displays anything in lvs | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Cache Logical Volumes | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | high | CC: | agk, heinzm, jbrassow, msnitzer, prajnoha, zkabelac |
| Version: | 7.2 | Keywords: | Regression, Triaged |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.129-2.el7 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-11-19 12:47:24 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Corey Marthaler
2015-08-19 20:29:38 UTC
It is a little easier to see the problem here; fwiw, 'cache_policy' and 'cachepolicy' give the same result in each example.

```
lvm2-2.02.127-1.el7.x86_64

[root@host-115 ~]# lvs -o devices,cachepolicy
  Devices                Cache Policy
  display_cache_corig(0) smq

lvm2-2.02.128-1.el7.x86_64

[root@host-109 ~]# lvs -o devices,cachepolicy
  Devices                Cache Policy
  display_cache_corig(0)
```

This is most likely fallout from bug 1255184.

The only major bug at the moment is that lvchange lets you change the cache policy; that operation is not yet working properly. For now, to get an 'smq' cached LV you need to create it directly with that policy. Changing from mq to smq (or smq to mq) is not possible without first clearing the cache and reinitializing the metadata. Under normal circumstances the cache policy should only be shown for a cached device (and I'll likely add some reporting for the cache pool).

After following your instructions in comment #3 and not using 'lvchange', the same problem persists even with a properly cached device (pool + origin).

```
# lvm2-2.02.128-1.el7.x86_64

[root@host-109 ~]# lvs -a -o +devices
  LV            Attr       LSize Pool Origin Data%  Meta%  Cpy%Sync Devices
  display_cache -wi-a----- 4.00g                                    /dev/sda1(0)

[root@host-109 ~]# lvcreate --yes --cachepolicy smq -L 4G -n pool cache_sanity --type cache-pool --cachemode writethrough -c 64 /dev/sdc1
  Logical volume "pool" created.

# Nothing displayed
[root@host-109 ~]# lvs -a -o +devices,cache_policy,cache_mode
  LV              Attr       LSize Pool Origin Data%  Meta%  Cpy%Sync Devices       Cache Policy Cachemode
  display_cache   -wi-a----- 4.00g                                    /dev/sda1(0)
  [lvol0_pmspare] ewi------- 8.00m                                    /dev/sdc1(0)
  pool            Cwi---C--- 4.00g                                    pool_cdata(0)              writethrough
  [pool_cdata]    Cwi------- 4.00g                                    /dev/sdc1(4)
  [pool_cmeta]    ewi------- 8.00m                                    /dev/sdc1(2)

[root@host-109 ~]# lvconvert --yes --type cache --cachepool cache_sanity/pool cache_sanity/display_cache
  Logical volume cache_sanity/display_cache is now cached.
```
```
# Still nothing displayed
[root@host-109 ~]# lvs -a -o +devices,cache_policy,cache_mode
  LV                    Attr       LSize Pool   Origin                Data%  Meta%  Cpy%Sync Devices                Cache Policy Cachemode
  display_cache         Cwi-a-C--- 4.00g [pool] [display_cache_corig] 0.00   6.59   100.00   display_cache_corig(0)              writethrough
  [display_cache_corig] owi-aoC--- 4.00g                                                     /dev/sda1(0)
  [lvol0_pmspare]       ewi------- 8.00m                                                     /dev/sdc1(0)
  [pool]                Cwi---C--- 4.00g                              0.00   6.59   100.00   pool_cdata(0)                       writethrough
  [pool_cdata]          Cwi-ao---- 4.00g                                                     /dev/sdc1(4)
  [pool_cmeta]          ewi-ao---- 8.00m                                                     /dev/sdc1(2)

[root@host-109 ~]# dmsetup status
cache_sanity-display_cache: 0 8388608 cache 8 135/2048 128 0/65536 0 61 0 0 0 0 0 1 writethrough 2 migration_threshold 2048 smq 0 rw -
cache_sanity-display_cache_corig: 0 8388608 linear
cache_sanity-pool_cdata: 0 8388608 linear
cache_sanity-pool_cmeta: 0 16384 linear
```

```
# lvm2-2.02.127-1.el7
# same operations as above before this convert...

[root@host-115 ~]# lvconvert --yes --type cache --cachepool cache_sanity/pool cache_sanity/display_cache
  Logical volume cache_sanity/display_cache is now cached.

# here the proper policy is displayed.
```
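Note that the policy name is still present in the raw `dmsetup status` output even while lvs shows an empty column, so it can be recovered by hand. A minimal sketch, assuming the dm-cache status layout (fixed size/counter fields, then `<#feature args> <features...> <#core args> <core args...> <policy name> ...`), run against the status line captured above:

```shell
# Extract the cache policy name from a 'dmsetup status' line of a cache target.
# The data is the line captured in this report; only the field walking is ours.
status='cache_sanity-display_cache: 0 8388608 cache 8 135/2048 128 0/65536 0 61 0 0 0 0 0 1 writethrough 2 migration_threshold 2048 smq 0 rw -'

policy=$(printf '%s\n' "$status" | awk '{
    # $1 name, $2 start, $3 length, $4 target type; $5-$15 sizes and counters
    i = 16            # $16 = number of feature arguments
    i += $i + 1       # skip the feature arguments ("writethrough" here)
    i += $i + 1       # $i held the core-arg count; skip those too
    print $i          # the field we land on is the policy name
}')
echo "$policy"
```

Against the line above this prints `smq`, i.e. what lvs should have been reporting.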
```
[root@host-115 ~]# lvs -a -o +devices,cache_policy,cache_mode
  LV                    Attr       LSize Pool   Origin                Data%  Meta%  Cpy%Sync Devices                Cache Policy Cachemode
  display_cache         Cwi-a-C--- 4.00g [pool] [display_cache_corig] 0.00   6.59   100.00   display_cache_corig(0) smq          writethrough
  [display_cache_corig] owi-aoC--- 4.00g                                                     /dev/sde1(0)
  [lvol0_pmspare]       ewi------- 8.00m                                                     /dev/sdc1(0)
  [pool]                Cwi---C--- 4.00g                              0.00   6.59   100.00   pool_cdata(0)                       writethrough
  [pool_cdata]          Cwi-ao---- 4.00g                                                     /dev/sdc1(4)
  [pool_cmeta]          ewi-ao---- 8.00m                                                     /dev/sdc1(2)

[root@host-115 ~]# dmsetup status
cache_sanity-display_cache: 0 8388608 cache 8 135/2048 128 0/65536 0 61 0 0 0 0 0 1 writethrough 2 migration_threshold 2048 smq 0 rw -
cache_sanity-display_cache_corig: 0 8388608 linear
cache_sanity-pool_cdata: 0 8388608 linear
cache_sanity-pool_cmeta: 0 16384 linear
```

Reporting is handled now by:
https://www.redhat.com/archives/lvm-devel/2015-August/msg00184.html

lvchange is not yet fixed, thus keeping the bug open.

Display seems fixed:

```
[root@bp-01 ~]# lvcreate -n cachepool -L 5G --type cache vg/lv
  Logical volume vg/lv is now cached.

[root@bp-01 ~]# lvs -o name,cache_policy vg
  LV Cache Policy
  lv mq
```

Marking verified in the latest rpms.
```
3.10.0-313.el7.x86_64

lvm2-2.02.129-2.el7                        BUILT: Wed Sep  2 02:51:56 CDT 2015
lvm2-libs-2.02.129-2.el7                   BUILT: Wed Sep  2 02:51:56 CDT 2015
lvm2-cluster-2.02.129-2.el7                BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-1.02.106-2.el7               BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-libs-1.02.106-2.el7          BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-event-1.02.106-2.el7         BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-event-libs-1.02.106-2.el7    BUILT: Wed Sep  2 02:51:56 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7  BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.129-2.el7                     BUILT: Wed Sep  2 02:51:56 CDT 2015
sanlock-3.2.4-1.el7                        BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7                    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.129-2.el7                  BUILT: Wed Sep  2 02:51:56 CDT 2015
```

```
[root@host-109 ~]# lvs -a -o +devices,cachemode,cache_policy
  LV                    Attr       LSize  Pool   Origin                Data%  Meta%  Cpy%Sync Devices                                      Cachemode Cache Policy
  display_cache         Cwi-a-C--- 4.01g  [pool] [display_cache_corig] 0.00   12.99  0.00     display_cache_corig(0)                       writeback smq
  [display_cache_corig] owi-aoC--- 4.01g                                                      /dev/sdc1(0),/dev/sdd1(0),/dev/sdc2(0)
  [lvol0_pmspare]       ewi------- 12.00m                                                     /dev/sdb2(343)
  [pool]                Cwi---C--- 4.01g                               0.00   12.99  0.00     pool_cdata(0)                                writeback smq
  [pool_cdata]          Cwi-ao---- 4.01g                                                      /dev/sdb2(0),/dev/sdb1(0),/dev/sdf2(0)
  [pool_cmeta]          ewi-ao---- 12.00m                                                     /dev/sdb2(342),/dev/sdb1(342),/dev/sdf2(342)

[root@host-109 ~]# lvs --noheadings -o lv_name --select 'cachemode=writeback && cachepolicy=smq'
  display_cache

[root@host-109 ~]# lvs -a -o +devices,cachemode,cachepolicy
  LV                    Attr       LSize  Pool   Origin                Data%  Meta%  Cpy%Sync Devices                                      Cachemode Cache Policy
  display_cache         Cwi-a-C--- 4.01g  [pool] [display_cache_corig] 0.00   12.99  0.00     display_cache_corig(0)                       writeback smq
  [display_cache_corig] owi-aoC--- 4.01g                                                      /dev/sdc1(0),/dev/sdd1(0),/dev/sdc2(0)
  [lvol0_pmspare]       ewi------- 12.00m                                                     /dev/sdb2(343)
  [pool]                Cwi---C--- 4.01g                               0.00   12.99  0.00     pool_cdata(0)                                writeback smq
  [pool_cdata]          Cwi-ao---- 4.01g                                                      /dev/sdb2(0),/dev/sdb1(0),/dev/sdf2(0)
  [pool_cmeta]          ewi-ao---- 12.00m                                                     /dev/sdb2(342),/dev/sdb1(342),/dev/sdf2(342)

[root@host-109 ~]# lvchange --cachepolicy cleaner cache_sanity/display_cache
  Logical volume "display_cache" changed.

[root@host-109 ~]# lvs -a -o +devices,cachemode,cachepolicy
  LV                    Attr       LSize  Pool   Origin                Data%  Meta%  Cpy%Sync Devices                                      Cachemode Cache Policy
  display_cache         Cwi-a-C--- 4.01g  [pool] [display_cache_corig] 0.00   12.99  0.00     display_cache_corig(0)                       writeback cleaner
  [display_cache_corig] owi-aoC--- 4.01g                                                      /dev/sdc1(0),/dev/sdd1(0),/dev/sdc2(0)
  [lvol0_pmspare]       ewi------- 12.00m                                                     /dev/sdb2(343)
  [pool]                Cwi---C--- 4.01g                               0.00   12.99  0.00     pool_cdata(0)                                writeback cleaner
  [pool_cdata]          Cwi-ao---- 4.01g                                                      /dev/sdb2(0),/dev/sdb1(0),/dev/sdf2(0)
  [pool_cmeta]          ewi-ao---- 12.00m                                                     /dev/sdb2(342),/dev/sdb1(342),/dev/sdf2(342)
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2147.html
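As an aside, the `--select 'cachemode=writeback && cachepolicy=smq'` check used during verification is just a logical AND over two report fields. A rough stand-in with plain text tools (the sample rows, the `scratch_lv` name, and the fixed `lv_name cachemode cachepolicy` column order are assumptions for illustration, not actual lvs output):

```shell
# Hypothetical sample mimicking: lvs --noheadings -o lv_name,cachemode,cachepolicy
report='display_cache writeback smq
scratch_lv writethrough mq'

# Keep only rows where both fields match, like the --select expression above.
match=$(printf '%s\n' "$report" | awk '$2 == "writeback" && $3 == "smq" { print $1 }')
echo "$match"
```

Only the `display_cache` row satisfies both conditions, matching the verified behavior.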