
Bug 1007102

Summary: request for clarification: will RAID scrubbing be supported on stacked thin pool volumes?
Product: Red Hat Enterprise Linux 6
Component: lvm2
Version: 6.5
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: low
Priority: unspecified
Target Milestone: rc
Target Release: ---
Reporter: Corey Marthaler <cmarthal>
Assignee: Jonathan Earl Brassow <jbrassow>
QA Contact: Cluster QE <mspqa-list>
Docs Contact:
CC: agk, dwysocha, heinzm, jbrassow, msnitzer, nperic, prajnoha, prockai, thornber, tlavigne, zkabelac
Whiteboard:
Fixed In Version: lvm2-2.02.100-6.el6
Doc Type: Bug Fix
Doc Text: No Doc Text required.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2013-11-21 23:28:11 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Corey Marthaler 2013-09-11 22:39:43 UTC
Description of problem:
[root@taft-01 ~]# lvs -a -o +devices
  LV                    Attr       LSize  Pool Data%  Cpy%Sync Devices
  POOL                  twi-a-tz--  1.00g        0.00          POOL_tdata(0)
  [POOL_tdata]          rwi-aor---  1.00g               100.00 POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0)
  [POOL_tdata_rimage_0] iwi-aor---  1.00g                      /dev/sdd1(1)
  [POOL_tdata_rimage_1] iwi-aor---  1.00g                      /dev/sdc1(1)
  [POOL_tdata_rmeta_0]  ewi-aor---  4.00m                      /dev/sdd1(0)
  [POOL_tdata_rmeta_1]  ewi-aor---  4.00m                      /dev/sdc1(0)
  [POOL_tmeta]          ewi-aor---  1.00g               100.00 POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)
  [POOL_tmeta_rimage_0] iwi-aor---  1.00g                      /dev/sdd1(258)
  [POOL_tmeta_rimage_1] iwi-aor---  1.00g                      /dev/sdc1(258)
  [POOL_tmeta_rmeta_0]  ewi-aor---  4.00m                      /dev/sdd1(257)
  [POOL_tmeta_rmeta_1]  ewi-aor---  4.00m                      /dev/sdc1(257)
  [lvol0_pmspare]       ewi-------  1.00g                      /dev/sdd1(514)
  origin                Vwi-a-tz--  1.00g POOL   0.00
  other1                Vwi-a-tz--  1.00g POOL   0.00
  other2                Vwi-a-tz--  1.00g POOL   0.00
  other3                Vwi-a-tz--  1.00g POOL   0.00
  other4                Vwi-a-tz--  1.00g POOL   0.00
  other5                Vwi-a-tz--  1.00g POOL   0.00

[root@taft-01 ~]# lvchange --syncaction repair snapper_thinp/POOL
  Failed to retrieve status of snapper_thinp/POOL


Version-Release number of selected component (if applicable):
2.6.32-410.el6.x86_64
lvm2-2.02.100-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
lvm2-libs-2.02.100-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
lvm2-cluster-2.02.100-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
udev-147-2.48.el6    BUILT: Fri Aug  9 06:09:50 CDT 2013
device-mapper-1.02.79-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
device-mapper-libs-1.02.79-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
device-mapper-event-1.02.79-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
device-mapper-event-libs-1.02.79-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
cmirror-2.02.100-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013

Comment 2 Jonathan Earl Brassow 2013-10-11 03:06:42 UTC
Yes, it will.

I'll try to clean up that error message, but in this case you should be addressing the underlying sub-LVs directly, like this:

lvchange --syncaction repair snapper_thinp/POOL_tdata
or
lvchange --syncaction repair snapper_thinp/POOL_tmeta

The command /could/ determine which sub-LV to perform the action on when only one of them is a RAID LV, but perhaps I'll save that kind of intelligence for a later release.
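The selection logic described above could be scripted today from `lvs` output. The sketch below is hypothetical (the helper names `is_raid_attr` and `pick_scrub_target` are illustrative, not part of lvm2); it assumes only that the first character of the `lv_attr` string reported by `lvs` is 'r' for a RAID LV, as seen in the listings in this bug (e.g. "rwi-aor---"):

```shell
# Hypothetical sketch, not part of lvm2: choose which thin-pool sub-LV to
# pass to `lvchange --syncaction`, preferring _tdata over _tmeta.

# In lvs output, the first character of lv_attr is 'r' for a RAID LV.
is_raid_attr() {
    case "$1" in
        r*) return 0 ;;   # RAID LV
        *)  return 1 ;;   # linear, thin pool, etc.
    esac
}

# Usage: pick_scrub_target VG/LV TDATA_ATTR TMETA_ATTR
# Prints the sub-LV to scrub; fails if neither sub-LV is RAID.
pick_scrub_target() {
    vg_lv=$1 tdata_attr=$2 tmeta_attr=$3
    if is_raid_attr "$tdata_attr"; then
        echo "${vg_lv}_tdata"
    elif is_raid_attr "$tmeta_attr"; then
        echo "${vg_lv}_tmeta"
    else
        echo "neither sub-LV of $vg_lv is RAID" >&2
        return 1
    fi
}

# Intended invocation on a live system (shown as a comment only):
# target=$(pick_scrub_target snapper_thinp/POOL \
#     "$(lvs --noheadings -o lv_attr snapper_thinp/POOL_tdata | tr -d ' ')" \
#     "$(lvs --noheadings -o lv_attr snapper_thinp/POOL_tmeta | tr -d ' ')") \
#   && lvchange --syncaction repair "$target"
```

This mirrors the attr strings shown in the listings here: a RAID tdata ("rwi-aor---") is selected, while a non-RAID tmeta ("ewi-ao----") is skipped.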

Comment 3 Jonathan Earl Brassow 2013-10-14 20:17:52 UTC
Better error message committed upstream:

commit d97583cfd395e5e31888558361ae9467cea60260
Author: Jonathan Brassow <jbrassow>
Date:   Mon Oct 14 15:14:16 2013 -0500

    RAID: Better error message when attempting scrubbing op on thinpool LV
    
    Component LVs of a thinpool can be RAID LVs.  Users who attempt a
    scrubbing operation directly on a thinpool will be prompted to
    specify the sub-LV they wish the operation to be performed on.  If
    neither of the sub-LVs are RAID, then a message telling them that
    the operation can only be performed on a RAID LV will be given.

Comment 5 Jonathan Earl Brassow 2013-10-14 22:36:00 UTC
[root@bp-02 lvm2]# lvchange --syncaction check vg/lv
  Thinpool data or metadata volume must be specified. (e.g. "vg/lv_tdata")
[root@bp-02 lvm2]# lvchange --syncaction check vg/lv_tdata
[root@bp-02 lvm2]# lvs -a -o +raid_sync_action vg
  LV                  VG   Attr       LSize Pool Origin Data%  Move Log Cpy%Sync Convert SyncAction
  lv                  vg   twi-a-tz-- 1.00g               0.00                                     
  [lv_tdata]          vg   rwi-aor--- 1.00g                                37.50         check     
  [lv_tdata_rimage_0] vg   iwi-aor--- 1.00g                                                        
  [lv_tdata_rimage_1] vg   iwi-aor--- 1.00g                                                        
  [lv_tdata_rmeta_0]  vg   ewi-aor--- 4.00m                                                        
  [lv_tdata_rmeta_1]  vg   ewi-aor--- 4.00m                                                        
  [lv_tmeta]          vg   ewi-ao---- 4.00m                                                        
  [lvol0_pmspare]     vg   ewi------- 4.00m

Comment 7 Nenad Peric 2013-10-22 07:51:15 UTC
[root@virt-008 ~]# lvs -a
  LV                    VG         Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  POOL                  normal     twi-a-tz--   3.00g               0.00                          
  [POOL_tdata]          normal     rwi-aor---   3.00g                               100.00        
  [POOL_tdata_rimage_0] normal     iwi-aor---   3.00g                                             
  [POOL_tdata_rimage_1] normal     iwi-aor---   3.00g                                             
  [POOL_tdata_rmeta_0]  normal     ewi-aor---   4.00m                                             
  [POOL_tdata_rmeta_1]  normal     ewi-aor---   4.00m                                             
  [POOL_tmeta]          normal     ewi-ao----   4.00m                                             
  [lvol0_pmspare]       normal     ewi-------   4.00m                                             
  lv_root               vg_virt008 -wi-ao----   6.71g                                             
  lv_swap               vg_virt008 -wi-ao---- 816.00m                                             
[root@virt-008 ~]# lvchange --syncaction repair normal/POOL
  Thinpool data or metadata volume must be specified. (e.g. "normal/POOL_tdata")
[root@virt-008 ~]# lvchange --syncaction repair normal/POOL_tdata


[root@virt-008 ~]# lvs -a
  LV                    VG         Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  POOL                  normal     twi-a-tz--   3.00g               0.00                          
  [POOL_tdata]          normal     rwi-aor---   3.00g                                25.65        
  [POOL_tdata_rimage_0] normal     iwi-aor---   3.00g                                             
  [POOL_tdata_rimage_1] normal     iwi-aor---   3.00g                                             
  [POOL_tdata_rmeta_0]  normal     ewi-aor---   4.00m                                             
  [POOL_tdata_rmeta_1]  normal     ewi-aor---   4.00m                                             
  [POOL_tmeta]          normal     ewi-ao----   4.00m                                             
  [lvol0_pmspare]       normal     ewi-------   4.00m                                             
  lv_root               vg_virt008 -wi-ao----   6.71g                                             
  lv_swap               vg_virt008 -wi-ao---- 816.00m                    


VERIFIED with lvm2-2.02.100-6.el6.x86_64

Comment 8 errata-xmlrpc 2013-11-21 23:28:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html