Bug 1169500 - RFE: allow raid scrubbing on cache pools that have not yet been used with a cache origin device
Summary: RFE: allow raid scrubbing on cache pools that have not yet been used with a cache origin device
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
Docs Contact: Milan Navratil
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-12-01 20:22 UTC by Corey Marthaler
Modified: 2021-09-03 12:40 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-05-10 15:18:43 UTC
Target Upstream Version:
Embargoed:


Attachments: none

Description Corey Marthaler 2014-12-01 20:22:50 UTC
Description of problem:
This is an RFE for comment #8 of bug 1086442. 

[root@host-119 ~]# lvcreate --type raid1 -m 1 -L 2G -n pool cache_sanity /dev/sde2 /dev/sda2
  Logical volume "pool" created.
[root@host-119 ~]# lvcreate --type raid1 -m 1 -L 8M -n pool_meta cache_sanity /dev/sde2 /dev/sda2
  Logical volume "pool_meta" created.

[root@host-119 ~]# lvs -a -o +devices
  LV                   Attr       LSize   Cpy%Sync Devices
  pool                 rwi-a-r---   2.00g 100.00   pool_rimage_0(0),pool_rimage_1(0)
  pool_meta            rwi-a-r---   8.00m 100.00   pool_meta_rimage_0(0),pool_meta_rimage_1(0)
  [pool_meta_rimage_0] iwi-aor---   8.00m          /dev/sde2(514)
  [pool_meta_rimage_1] iwi-aor---   8.00m          /dev/sda2(514)
  [pool_meta_rmeta_0]  ewi-aor---   4.00m          /dev/sde2(513)
  [pool_meta_rmeta_1]  ewi-aor---   4.00m          /dev/sda2(513)
  [pool_rimage_0]      iwi-aor---   2.00g          /dev/sde2(1)
  [pool_rimage_1]      iwi-aor---   2.00g          /dev/sda2(1)
  [pool_rmeta_0]       ewi-aor---   4.00m          /dev/sde2(0)
  [pool_rmeta_1]       ewi-aor---   4.00m          /dev/sda2(0)

[root@host-119 ~]# lvconvert --yes --type cache-pool --poolmetadata cache_sanity/pool_meta cache_sanity/pool
  WARNING: Converting logical volume cache_sanity/pool and cache_sanity/pool_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/pool to cache pool.

[root@host-119 ~]# lvs -a -o +devices
  LV                    Attr       LSize   Cpy%Sync Devices
  [lvol0_pmspare]       ewi-------   8.00m          /dev/sda1(0)
  pool                  Cwi---C---   2.00g          pool_cdata(0)
  [pool_cdata]          Cwi---r---   2.00g          pool_cdata_rimage_0(0),pool_cdata_rimage_1(0)
  [pool_cdata_rimage_0] Iwi---r---   2.00g          /dev/sde2(1)
  [pool_cdata_rimage_1] Iwi---r---   2.00g          /dev/sda2(1)
  [pool_cdata_rmeta_0]  ewi---r---   4.00m          /dev/sde2(0)
  [pool_cdata_rmeta_1]  ewi---r---   4.00m          /dev/sda2(0)
  [pool_cmeta]          ewi---r---   8.00m          pool_cmeta_rimage_0(0),pool_cmeta_rimage_1(0)
  [pool_cmeta_rimage_0] Iwi---r---   8.00m          /dev/sde2(514)
  [pool_cmeta_rimage_1] Iwi---r---   8.00m          /dev/sda2(514)
  [pool_cmeta_rmeta_0]  ewi---r---   4.00m          /dev/sde2(513)
  [pool_cmeta_rmeta_1]  ewi---r---   4.00m          /dev/sda2(513)

[root@host-119 ~]# lvchange --syncaction repair cache_sanity/pool_cdata
  Unable to send message to an inactive logical volume.
[root@host-119 ~]# lvchange --syncaction repair cache_sanity/pool_cmeta
  Unable to send message to an inactive logical volume.
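
For comparison, at the stage shown in the first lvs output above the future data and metadata LVs are still plain, active RAID1 LVs, so a scrub request should be accepted there. A minimal sketch of that ordering (names reused from this report; a hypothetical workaround, not something verified here):

  # scrub the mirrors while they are still ordinary, active RAID1 LVs
  lvchange --syncaction repair cache_sanity/pool
  lvchange --syncaction repair cache_sanity/pool_meta
  # only then stack them into a cache pool
  lvconvert --yes --type cache-pool --poolmetadata cache_sanity/pool_meta cache_sanity/pool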


Version-Release number of selected component (if applicable):
3.10.0-206.el7.x86_64

lvm2-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
lvm2-libs-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
lvm2-cluster-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-libs-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-event-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-event-libs-1.02.92-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014
device-mapper-persistent-data-0.4.1-2.el7    BUILT: Wed Nov 12 12:39:46 CST 2014
cmirror-2.02.114-2.el7    BUILT: Mon Dec  1 10:57:14 CST 2014

Comment 3 Jonathan Earl Brassow 2016-09-01 13:08:06 UTC
Can't do this because the cache-pool cannot be activated.  RAID logical volumes (even under a pool) need to be active in order to start a sync operation.

I'm not sure this is a big problem because an unused cache has no data in it.  I suppose one could argue that an inconsistency in the RAID - even if it never affects user data - can be a pain when scrubbing later because it will encounter a mismatch.  That mismatch will cause concern that the data may be in trouble even when it isn't.

I am inclined to close this bug WONTFIX.
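
For reference, on an ordinary active RAID LV the scrub and the mismatch count mentioned above are typically requested and inspected like this (a minimal sketch; "vg/raidlv" is a hypothetical name, and raid_sync_action/raid_mismatch_count are standard lvs reporting fields):

  # request a read-only scrub of an active RAID LV
  lvchange --syncaction check vg/raidlv
  # once the check finishes, any discrepancies show up in the mismatch count
  lvs -o +raid_sync_action,raid_mismatch_count vg/raidlv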

Comment 4 Jonathan Earl Brassow 2016-09-01 13:21:56 UTC
I'm pushing this bug to 7.4 to allow more time for discussion about what could be done.  This is a pretty minor issue.

Comment 5 Jonathan Earl Brassow 2017-05-10 15:18:43 UTC
There is no need to scrub an unused cache-pool - it doesn't have any useful contents.

If users /really/ need to scrub an unused cache-pool (because they want to make sure even unused contents are in-sync before using), they can remove it and create a new one.
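
A minimal sketch of that remove-and-recreate route, reusing the names from the description above (destructive, so only sensible while the cache pool is still unused and unattached):

  # drop the unused cache pool (this also removes its data and metadata sub-LVs)
  lvremove -f cache_sanity/pool
  # rebuild the RAID1 data and metadata LVs, scrub them if desired, then convert again
  lvcreate --type raid1 -m 1 -L 2G -n pool cache_sanity /dev/sde2 /dev/sda2
  lvcreate --type raid1 -m 1 -L 8M -n pool_meta cache_sanity /dev/sde2 /dev/sda2
  lvconvert --yes --type cache-pool --poolmetadata cache_sanity/pool_meta cache_sanity/pool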

