Bug 1169495
| Summary: | RFE: allow raid scrubbing on cache origin raid volumes | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Heinz Mauelshagen <heinzm> |
| lvm2 sub component: | Cache Logical Volumes | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, msnitzer, mthacker, prajnoha, tlavigne, zkabelac |
| Version: | 7.1 | Keywords: | FutureFeature |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.165-1.el7 | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-11-04 04:08:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
|
Description
Corey Marthaler
2014-12-01 20:10:42 UTC
I've been running a couple of checks, and just to be clear, you CAN do:

```
# lvchange --syncaction check vg/cpool_cmeta
# lvchange --syncaction check vg/cpool_cdata
```

So the cache-pool sub-LVs can be acted on, but the origin (if RAID) cannot:

```
# lvchange --syncaction check vg/lv_corig
  Unable to change internal LV vg/lv_corig directly.
```

It seems that creating an `lv_is_cache_origin(lv)` macro and adding it to the condition at lvchange.c:1052
```c
if ((lv_is_thin_pool_data(lv) || lv_is_thin_pool_metadata(lv) ||
     lv_is_cache_pool_data(lv) || lv_is_cache_pool_metadata(lv)) &&
    !arg_is_set(cmd, activate_ARG) &&
    !arg_is_set(cmd, permission_ARG) &&
    !arg_is_set(cmd, setactivationskip_ARG))
        /* Rest can be changed for stacked thin pool meta/data volumes */
        ;
else if (!lv_is_visible(lv) && !lv_is_virtual_origin(lv)) {
        log_error("Unable to change internal LV %s directly. (0x%x)",
                  display_lvname(lv), lv->status);
        return ECMD_FAILED;
}
```
would do the trick. However, we need to ensure that we don't run into any issues involving clvmd, etc. (I'm not sure how much vetting has been done in this regard for the other sub-LVs.)
Verified that scrubbing is now allowed on a raid cache origin.

```
3.10.0-501.el7.x86_64

lvm2-2.02.165-1.el7                        BUILT: Wed Sep  7 11:04:22 CDT 2016
lvm2-libs-2.02.165-1.el7                   BUILT: Wed Sep  7 11:04:22 CDT 2016
lvm2-cluster-2.02.165-1.el7                BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-1.02.134-1.el7               BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-libs-1.02.134-1.el7          BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-event-1.02.134-1.el7         BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-event-libs-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7  BUILT: Fri Jul 22 05:29:13 CDT 2016
```

```
# rhel7.2
[root@host-128 ~]# lvs -a -o +devices
  LV                             VG           Attr       LSize  Pool   Origin                Data%  Meta%  Cpy%Sync  Devices
  display_cache                  cache_sanity Cwi-a-C---  4.00g [pool] [display_cache_corig] 0.00   8.66   100.00    display_cache_corig(0)
  [display_cache_corig]          cache_sanity rwi-aoC---  4.00g                                            100.00    display_cache_corig_rimage_0(0),display_cache_corig_rimage_1(0)
  [display_cache_corig_rimage_0] cache_sanity iwi-aor---  4.00g                                                      /dev/sda1(1)
  [display_cache_corig_rimage_1] cache_sanity iwi-aor---  4.00g                                                      /dev/sda2(1)
  [display_cache_corig_rmeta_0]  cache_sanity ewi-aor---  4.00m                                                      /dev/sda1(0)
  [display_cache_corig_rmeta_1]  cache_sanity ewi-aor---  4.00m                                                      /dev/sda2(0)
  [lvol0_pmspare]                cache_sanity ewi------- 12.00m                                                      /dev/sda1(2054)
  [pool]                         cache_sanity Cwi---C---  4.00g                              0.00   8.66   100.00    pool_cdata(0)
  [pool_cdata]                   cache_sanity Cwi-aor---  4.00g                                            100.00    pool_cdata_rimage_0(0),pool_cdata_rimage_1(0)
  [pool_cdata_rimage_0]          cache_sanity iwi-aor---  4.00g                                                      /dev/sda1(1026)
  [pool_cdata_rimage_1]          cache_sanity iwi-aor---  4.00g                                                      /dev/sda2(1026)
  [pool_cdata_rmeta_0]           cache_sanity ewi-aor---  4.00m                                                      /dev/sda1(1025)
  [pool_cdata_rmeta_1]           cache_sanity ewi-aor---  4.00m                                                      /dev/sda2(1025)
  [pool_cmeta]                   cache_sanity ewi-aor--- 12.00m                                            100.00    pool_cmeta_rimage_0(0),pool_cmeta_rimage_1(0)
  [pool_cmeta_rimage_0]          cache_sanity iwi-aor--- 12.00m                                                      /dev/sda1(2051)
  [pool_cmeta_rimage_1]          cache_sanity iwi-aor--- 12.00m                                                      /dev/sda2(2051)
  [pool_cmeta_rmeta_0]           cache_sanity ewi-aor---  4.00m                                                      /dev/sda1(2050)
  [pool_cmeta_rmeta_1]           cache_sanity ewi-aor---  4.00m                                                      /dev/sda2(2050)

[root@host-128 ~]# lvchange --syncaction repair cache_sanity/display_cache_corig
  Unable to change internal LV display_cache_corig directly
```

```
# rhel7.3
[root@host-118 ~]# lvs -a -o +devices
  LV                             VG           Attr       LSize  Pool   Origin  Data%  Meta%  Cpy%Sync  Devices
  display_cache                  cache_sanity Cwi-a-C---  4.00g [pool]         0.00          100.00    display_cache_corig(0)
  [display_cache_corig]          cache_sanity rwi-aoC---  4.00g                              100.00    display_cache_corig_rimage_0(0),display_cache_corig_rimage_1(0)
  [display_cache_corig_rimage_0] cache_sanity iwi-aor---  4.00g                                        /dev/sdf1(1)
  [display_cache_corig_rimage_1] cache_sanity iwi-aor---  4.00g                                        /dev/sde2(1)
  [display_cache_corig_rmeta_0]  cache_sanity ewi-aor---  4.00m                                        /dev/sdf1(0)
  [display_cache_corig_rmeta_1]  cache_sanity ewi-aor---  4.00m                                        /dev/sde2(0)
  [lvol0_pmspare]                cache_sanity ewi------- 12.00m                                        /dev/sdf2(1029)
  [pool]                         cache_sanity Cwi---C---  4.00g                                        pool_cdata(0)
  [pool_cdata]                   cache_sanity Cwi-aor---  4.00g                              100.00    pool_cdata_rimage_0(0),pool_cdata_rimage_1(0)
  [pool_cdata_rimage_0]          cache_sanity iwi-aor---  4.00g                                        /dev/sdf2(1)
  [pool_cdata_rimage_1]          cache_sanity iwi-aor---  4.00g                                        /dev/sdc1(1)
  [pool_cdata_rmeta_0]           cache_sanity ewi-aor---  4.00m                                        /dev/sdf2(0)
  [pool_cdata_rmeta_1]           cache_sanity ewi-aor---  4.00m                                        /dev/sdc1(0)
  [pool_cmeta]                   cache_sanity ewi-aor--- 12.00m                              100.00    pool_cmeta_rimage_0(0),pool_cmeta_rimage_1(0)
  [pool_cmeta_rimage_0]          cache_sanity iwi-aor--- 12.00m                                        /dev/sdf2(1026)
  [pool_cmeta_rimage_1]          cache_sanity iwi-aor--- 12.00m                                        /dev/sdc1(1026)
  [pool_cmeta_rmeta_0]           cache_sanity ewi-aor---  4.00m                                        /dev/sdf2(1025)
  [pool_cmeta_rmeta_1]           cache_sanity ewi-aor---  4.00m                                        /dev/sdc1(1025)

[root@host-118 ~]# lvchange --syncaction repair cache_sanity/display_cache_corig

Sep  7 15:06:07 host-118 kernel: md: requested-resync of RAID array mdX
Sep  7 15:06:07 host-118 kernel: md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
Sep  7 15:06:07 host-118 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for requested-resync.
Sep  7 15:06:07 host-118 kernel: md: using 128k window, over a total of 4194304k.
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html