Bug 1169495 - RFE: allow raid scrubbing on cache origin raid volumes
Summary: RFE: allow raid scrubbing on cache origin raid volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-12-01 20:10 UTC by Corey Marthaler
Modified: 2021-09-03 12:36 UTC
CC List: 8 users

Fixed In Version: lvm2-2.02.165-1.el7
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-11-04 04:08:18 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1445 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2016-11-03 13:46:41 UTC

Description Corey Marthaler 2014-12-01 20:10:42 UTC
Description of problem:
[root@host-119 ~]# lvs -a -o +devices
 LV                             Attr       LSize Pool            Origin          Data%  Meta% Cpy%Sync Devices
 corigin                        Cwi-a-C--- 4.00g [display_cache] [corigin_corig] 0.02   3.47  0.00     corigin_corig(0)
 [corigin_corig]                rwi-aoC--- 4.00g                                              100.00   corigin_corig_rimage_0(0),corigin_corig_rimage_1(0)
 [corigin_corig_rimage_0]       iwi-aor--- 4.00g                                                       /dev/sdf1(1)
 [corigin_corig_rimage_1]       iwi-aor--- 4.00g                                                       /dev/sdb1(1)
 [corigin_corig_rmeta_0]        ewi-aor--- 4.00m                                                       /dev/sdf1(0)
 [corigin_corig_rmeta_1]        ewi-aor--- 4.00m                                                       /dev/sdb1(0)
 [display_cache]                Cwi---C--- 2.00g                                 0.02   3.47  0.00     display_cache_cdata(0)
 [display_cache_cdata]          Cwi-aor--- 2.00g                                              100.00   display_cache_cdata_rimage_0(0),display_cache_cdata_rimage_1(0)
 [display_cache_cdata_rimage_0] iwi-aor--- 2.00g                                                       /dev/sdf2(1)
 [display_cache_cdata_rimage_1] iwi-aor--- 2.00g                                                       /dev/sdc2(1)
 [display_cache_cdata_rmeta_0]  ewi-aor--- 4.00m                                                       /dev/sdf2(0)
 [display_cache_cdata_rmeta_1]  ewi-aor--- 4.00m                                                       /dev/sdc2(0)
 [display_cache_cmeta]          ewi-aor--- 8.00m                                              100.00   display_cache_cmeta_rimage_0(0),display_cache_cmeta_rimage_1(0)
 [display_cache_cmeta_rimage_0] iwi-aor--- 8.00m                                                       /dev/sdf2(514)
 [display_cache_cmeta_rimage_1] iwi-aor--- 8.00m                                                       /dev/sdc2(514)
 [display_cache_cmeta_rmeta_0]  ewi-aor--- 4.00m                                                       /dev/sdf2(513)
 [display_cache_cmeta_rmeta_1]  ewi-aor--- 4.00m                                                       /dev/sdc2(513)
 [lvol0_pmspare]                ewi------- 8.00m                                                       /dev/sdd2(0)

[root@host-119 ~]# lvchange --syncaction repair cache_sanity/corigin_corig
  Unable to change internal LV corigin_corig directly

[root@host-119 ~]# lvchange --syncaction check cache_sanity/corigin_corig
  Unable to change internal LV corigin_corig directly


Version-Release number of selected component (if applicable):
3.10.0-206.el7.x86_64
lvm2-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
lvm2-libs-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
lvm2-cluster-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-libs-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-event-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-event-libs-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-persistent-data-0.4.1-2.el7    BUILT: Wed Nov 12 12:39:46 CST 2014
cmirror-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014

Comment 3 Jonathan Earl Brassow 2016-08-31 23:55:42 UTC
I've been running a couple checks, and just to be clear, you CAN do:
# lvchange --syncaction check vg/cpool_cmeta
and
# lvchange --syncaction check vg/cpool_cdata

So the cache-pool sub-LVs can be acted on, but the origin (if RAID) cannot:
# lvchange --syncaction check vg/lv_corig
  Unable to change internal LV vg/lv_corig directly.

Comment 4 Jonathan Earl Brassow 2016-09-01 13:35:16 UTC
Seems like creating a 'lv_is_cache_origin(lv)' macro and adding it to lvchange.c:1052
        if ((lv_is_thin_pool_data(lv) || lv_is_thin_pool_metadata(lv) ||
             lv_is_cache_pool_data(lv) || lv_is_cache_pool_metadata(lv)) &&
            !arg_is_set(cmd, activate_ARG) &&
            !arg_is_set(cmd, permission_ARG) &&
            !arg_is_set(cmd, setactivationskip_ARG))
                /* Rest can be changed for stacked thin pool meta/data volumes */
                ;
        else if (!lv_is_visible(lv) && !lv_is_virtual_origin(lv)) {
                log_error("Unable to change internal LV %s directly. (0x%x)",
                          display_lvname(lv), lv->status);
                return ECMD_FAILED;
        }
would do the trick.  However, we need to ensure that we don't run into any issues involving clvmd, etc.  (I'm not sure how much vetting has been done in this regard for the other sub-LVs.)
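
For illustration, here is a minimal sketch of what that change might look like in place. It only adds the proposed lv_is_cache_origin() test to the condition quoted above and assumes the macro behaves like the existing lv_is_*() predicates; the actual fix that shipped in lvm2-2.02.165-1.el7 may be structured differently.

        /* Sketch only (assumed form of the proposed change, not necessarily
         * the code that landed): let a cache origin LV fall through so that
         * --syncaction check/repair can reach a RAID cache origin. */
        if ((lv_is_thin_pool_data(lv) || lv_is_thin_pool_metadata(lv) ||
             lv_is_cache_pool_data(lv) || lv_is_cache_pool_metadata(lv) ||
             lv_is_cache_origin(lv)) &&
            !arg_is_set(cmd, activate_ARG) &&
            !arg_is_set(cmd, permission_ARG) &&
            !arg_is_set(cmd, setactivationskip_ARG))
                /* Rest can be changed for stacked thin pool meta/data volumes
                 * and, with this change, for cache origin sub-LVs as well. */
                ;
        else if (!lv_is_visible(lv) && !lv_is_virtual_origin(lv)) {
                log_error("Unable to change internal LV %s directly. (0x%x)",
                          display_lvname(lv), lv->status);
                return ECMD_FAILED;
        }

With such a check, a command like "lvchange --syncaction check cache_sanity/display_cache_corig" would no longer be rejected as an internal LV and would proceed to normal syncaction handling, which is the behavior verified on rhel7.3 below.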

Comment 7 Corey Marthaler 2016-09-07 20:13:32 UTC
Verified that scrubbing is now allowed on a raid cache origin.

3.10.0-501.el7.x86_64
lvm2-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
lvm2-libs-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
lvm2-cluster-2.02.165-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-libs-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-event-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-event-libs-1.02.134-1.el7    BUILT: Wed Sep  7 11:04:22 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016




# rhel7.2

[root@host-128 ~]# lvs -a -o +devices
  LV                             VG            Attr       LSize   Pool   Origin                Data%  Meta% Cpy%Sync Devices
  display_cache                  cache_sanity  Cwi-a-C---   4.00g [pool] [display_cache_corig] 0.00   8.66  100.00   display_cache_corig(0)
  [display_cache_corig]          cache_sanity  rwi-aoC---   4.00g                                           100.00   display_cache_corig_rimage_0(0),display_cache_corig_rimage_1(0)
  [display_cache_corig_rimage_0] cache_sanity  iwi-aor---   4.00g                                                    /dev/sda1(1)
  [display_cache_corig_rimage_1] cache_sanity  iwi-aor---   4.00g                                                    /dev/sda2(1)
  [display_cache_corig_rmeta_0]  cache_sanity  ewi-aor---   4.00m                                                    /dev/sda1(0)
  [display_cache_corig_rmeta_1]  cache_sanity  ewi-aor---   4.00m                                                    /dev/sda2(0)
  [lvol0_pmspare]                cache_sanity  ewi-------  12.00m                                                    /dev/sda1(2054)
  [pool]                         cache_sanity  Cwi---C---   4.00g                              0.00   8.66  100.00   pool_cdata(0)
  [pool_cdata]                   cache_sanity  Cwi-aor---   4.00g                                           100.00   pool_cdata_rimage_0(0),pool_cdata_rimage_1(0)
  [pool_cdata_rimage_0]          cache_sanity  iwi-aor---   4.00g                                                    /dev/sda1(1026)
  [pool_cdata_rimage_1]          cache_sanity  iwi-aor---   4.00g                                                    /dev/sda2(1026)
  [pool_cdata_rmeta_0]           cache_sanity  ewi-aor---   4.00m                                                    /dev/sda1(1025)
  [pool_cdata_rmeta_1]           cache_sanity  ewi-aor---   4.00m                                                    /dev/sda2(1025)
  [pool_cmeta]                   cache_sanity  ewi-aor---  12.00m                                           100.00   pool_cmeta_rimage_0(0),pool_cmeta_rimage_1(0)
  [pool_cmeta_rimage_0]          cache_sanity  iwi-aor---  12.00m                                                    /dev/sda1(2051)
  [pool_cmeta_rimage_1]          cache_sanity  iwi-aor---  12.00m                                                    /dev/sda2(2051)
  [pool_cmeta_rmeta_0]           cache_sanity  ewi-aor---   4.00m                                                    /dev/sda1(2050)
  [pool_cmeta_rmeta_1]           cache_sanity  ewi-aor---   4.00m                                                    /dev/sda2(2050)

[root@host-128 ~]# lvchange --syncaction repair cache_sanity/display_cache_corig
  Unable to change internal LV display_cache_corig directly



# rhel7.3

[root@host-118 ~]# lvs -a -o +devices
  LV                             VG            Attr       LSize   Pool   Origin Data%  Meta% Cpy%Sync Devices
  display_cache                  cache_sanity  Cwi-a-C---   4.00g [pool]        0.00         100.00   display_cache_corig(0)
  [display_cache_corig]          cache_sanity  rwi-aoC---   4.00g                            100.00   display_cache_corig_rimage_0(0),display_cache_corig_rimage_1(0)
  [display_cache_corig_rimage_0] cache_sanity  iwi-aor---   4.00g                                     /dev/sdf1(1)
  [display_cache_corig_rimage_1] cache_sanity  iwi-aor---   4.00g                                     /dev/sde2(1)
  [display_cache_corig_rmeta_0]  cache_sanity  ewi-aor---   4.00m                                     /dev/sdf1(0)
  [display_cache_corig_rmeta_1]  cache_sanity  ewi-aor---   4.00m                                     /dev/sde2(0)
  [lvol0_pmspare]                cache_sanity  ewi-------  12.00m                                     /dev/sdf2(1029)
  [pool]                         cache_sanity  Cwi---C---   4.00g                                     pool_cdata(0)
  [pool_cdata]                   cache_sanity  Cwi-aor---   4.00g                            100.00   pool_cdata_rimage_0(0),pool_cdata_rimage_1(0)
  [pool_cdata_rimage_0]          cache_sanity  iwi-aor---   4.00g                                     /dev/sdf2(1)
  [pool_cdata_rimage_1]          cache_sanity  iwi-aor---   4.00g                                     /dev/sdc1(1)
  [pool_cdata_rmeta_0]           cache_sanity  ewi-aor---   4.00m                                     /dev/sdf2(0)
  [pool_cdata_rmeta_1]           cache_sanity  ewi-aor---   4.00m                                     /dev/sdc1(0)
  [pool_cmeta]                   cache_sanity  ewi-aor---  12.00m                            100.00   pool_cmeta_rimage_0(0),pool_cmeta_rimage_1(0)
  [pool_cmeta_rimage_0]          cache_sanity  iwi-aor---  12.00m                                     /dev/sdf2(1026)
  [pool_cmeta_rimage_1]          cache_sanity  iwi-aor---  12.00m                                     /dev/sdc1(1026)
  [pool_cmeta_rmeta_0]           cache_sanity  ewi-aor---   4.00m                                     /dev/sdf2(1025)
  [pool_cmeta_rmeta_1]           cache_sanity  ewi-aor---   4.00m                                     /dev/sdc1(1025)

[root@host-118 ~]# lvchange --syncaction repair cache_sanity/display_cache_corig

Sep  7 15:06:07 host-118 kernel: md: requested-resync of RAID array mdX
Sep  7 15:06:07 host-118 kernel: md: minimum _guaranteed_  speed: 1000 KB/sec/disk.
Sep  7 15:06:07 host-118 kernel: md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for requested-resync.
Sep  7 15:06:07 host-118 kernel: md: using 128k window, over a total of 4194304k.

Comment 9 errata-xmlrpc 2016-11-04 04:08:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html

