Bug 1007102 - request for clarification: will raid scrubbing be supported on stacked thin pool volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: Jonathan Earl Brassow
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-09-11 22:39 UTC by Corey Marthaler
Modified: 2013-11-21 23:28 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.100-6.el6
Doc Type: Bug Fix
Doc Text:
No Doc Text required.
Clone Of:
Environment:
Last Closed: 2013-11-21 23:28:11 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2013:1704 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2013-11-20 21:52:01 UTC

Description Corey Marthaler 2013-09-11 22:39:43 UTC
Description of problem:
[root@taft-01 ~]# lvs -a -o +devices
  LV                    Attr       LSize  Pool Data%  Cpy%Sync Devices
  POOL                  twi-a-tz--  1.00g        0.00          POOL_tdata(0)
  [POOL_tdata]          rwi-aor---  1.00g               100.00 POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0)
  [POOL_tdata_rimage_0] iwi-aor---  1.00g                      /dev/sdd1(1)
  [POOL_tdata_rimage_1] iwi-aor---  1.00g                      /dev/sdc1(1)
  [POOL_tdata_rmeta_0]  ewi-aor---  4.00m                      /dev/sdd1(0)
  [POOL_tdata_rmeta_1]  ewi-aor---  4.00m                      /dev/sdc1(0)
  [POOL_tmeta]          ewi-aor---  1.00g               100.00 POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)
  [POOL_tmeta_rimage_0] iwi-aor---  1.00g                      /dev/sdd1(258)
  [POOL_tmeta_rimage_1] iwi-aor---  1.00g                      /dev/sdc1(258)
  [POOL_tmeta_rmeta_0]  ewi-aor---  4.00m                      /dev/sdd1(257)
  [POOL_tmeta_rmeta_1]  ewi-aor---  4.00m                      /dev/sdc1(257)
  [lvol0_pmspare]       ewi-------  1.00g                      /dev/sdd1(514)
  origin                Vwi-a-tz--  1.00g POOL   0.00
  other1                Vwi-a-tz--  1.00g POOL   0.00
  other2                Vwi-a-tz--  1.00g POOL   0.00
  other3                Vwi-a-tz--  1.00g POOL   0.00
  other4                Vwi-a-tz--  1.00g POOL   0.00
  other5                Vwi-a-tz--  1.00g POOL   0.00

[root@taft-01 ~]# lvchange --syncaction repair snapper_thinp/POOL
  Failed to retrieve status of snapper_thinp/POOL


Version-Release number of selected component (if applicable):
2.6.32-410.el6.x86_64
lvm2-2.02.100-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
lvm2-libs-2.02.100-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
lvm2-cluster-2.02.100-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
udev-147-2.48.el6    BUILT: Fri Aug  9 06:09:50 CDT 2013
device-mapper-1.02.79-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
device-mapper-libs-1.02.79-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
device-mapper-event-1.02.79-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
device-mapper-event-libs-1.02.79-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013
cmirror-2.02.100-2.el6    BUILT: Wed Aug 14 10:23:33 CDT 2013

Comment 2 Jonathan Earl Brassow 2013-10-11 03:06:42 UTC
Yes, it will.

I'll try to clean up that error message, but in this case you should address the underlying sub-LVs directly, like this:

lvchange --syncaction repair snapper_thinp/POOL_tdata
or
lvchange --syncaction repair snapper_thinp/POOL_tmeta

The command /could/ determine which sub-LV to perform the action on when only one of them is a RAID LV, but I'll save that kind of intelligence for a later release.
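The "intelligence" described above can be sketched in a few lines of shell. This is a minimal, hypothetical example (not from the bug): the `lvs_output` variable is a hard-coded sample standing in for real output of `lvs --noheadings -a -o name,segtype <vg>`, and the pool/VG names are assumptions.

```shell
# Sample output mimicking: lvs --noheadings -a -o name,segtype snapper_thinp
# On a live system you would capture this from lvs itself.
lvs_output='[POOL_tdata] raid1
[POOL_tdata_rimage_0] linear
[POOL_tmeta] linear'

# Select the top-level pool sub-LV (tdata or tmeta) whose segment
# type is raid*, and strip the surrounding brackets from its name.
raid_sublv=$(printf '%s\n' "$lvs_output" \
  | awk '$1 ~ /^\[POOL_t(data|meta)\]$/ && $2 ~ /^raid/ { gsub(/[][]/, "", $1); print $1 }')

echo "$raid_sublv"
# prints: POOL_tdata

# On a live system, the scrubbing command would then be:
#   lvchange --syncaction repair snapper_thinp/"$raid_sublv"
```

When exactly one sub-LV is RAID, this picks it unambiguously; if both tdata and tmeta were RAID, the script would print both names and the caller would still have to choose.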

Comment 3 Jonathan Earl Brassow 2013-10-14 20:17:52 UTC
Better error message committed upstream:

commit d97583cfd395e5e31888558361ae9467cea60260
Author: Jonathan Brassow <jbrassow>
Date:   Mon Oct 14 15:14:16 2013 -0500

    RAID: Better error message when attempting scrubbing op on thinpool LV
    
    Component LVs of a thinpool can be RAID LVs.  Users who attempt a
    scrubbing operation directly on a thinpool will be prompted to
    specify the sub-LV they wish the operation to be performed on.  If
    neither of the sub-LVs are RAID, then a message telling them that
    the operation can only be performed on a RAID LV will be given.

Comment 5 Jonathan Earl Brassow 2013-10-14 22:36:00 UTC
[root@bp-02 lvm2]# lvchange --syncaction check vg/lv
  Thinpool data or metadata volume must be specified. (e.g. "vg/lv_tdata")
[root@bp-02 lvm2]# lvchange --syncaction check vg/lv_tdata
[root@bp-02 lvm2]# lvs -a -o +raid_sync_action vg
  LV                  VG   Attr       LSize Pool Origin Data%  Move Log Cpy%Sync Convert SyncAction
  lv                  vg   twi-a-tz-- 1.00g               0.00                                     
  [lv_tdata]          vg   rwi-aor--- 1.00g                                37.50         check     
  [lv_tdata_rimage_0] vg   iwi-aor--- 1.00g                                                        
  [lv_tdata_rimage_1] vg   iwi-aor--- 1.00g                                                        
  [lv_tdata_rmeta_0]  vg   ewi-aor--- 4.00m                                                        
  [lv_tdata_rmeta_1]  vg   ewi-aor--- 4.00m                                                        
  [lv_tmeta]          vg   ewi-ao---- 4.00m                                                        
  [lvol0_pmspare]     vg   ewi------- 4.00m

Comment 7 Nenad Peric 2013-10-22 07:51:15 UTC
[root@virt-008 ~]# lvs -a
  LV                    VG         Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  POOL                  normal     twi-a-tz--   3.00g               0.00                          
  [POOL_tdata]          normal     rwi-aor---   3.00g                               100.00        
  [POOL_tdata_rimage_0] normal     iwi-aor---   3.00g                                             
  [POOL_tdata_rimage_1] normal     iwi-aor---   3.00g                                             
  [POOL_tdata_rmeta_0]  normal     ewi-aor---   4.00m                                             
  [POOL_tdata_rmeta_1]  normal     ewi-aor---   4.00m                                             
  [POOL_tmeta]          normal     ewi-ao----   4.00m                                             
  [lvol0_pmspare]       normal     ewi-------   4.00m                                             
  lv_root               vg_virt008 -wi-ao----   6.71g                                             
  lv_swap               vg_virt008 -wi-ao---- 816.00m                                             
[root@virt-008 ~]# lvchange --syncaction repair normal/POOL
  Thinpool data or metadata volume must be specified. (e.g. "normal/POOL_tdata")
[root@virt-008 ~]# lvchange --syncaction repair normal/POOL_tdata


[root@virt-008 ~]# lvs -a
  LV                    VG         Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  POOL                  normal     twi-a-tz--   3.00g               0.00                          
  [POOL_tdata]          normal     rwi-aor---   3.00g                                25.65        
  [POOL_tdata_rimage_0] normal     iwi-aor---   3.00g                                             
  [POOL_tdata_rimage_1] normal     iwi-aor---   3.00g                                             
  [POOL_tdata_rmeta_0]  normal     ewi-aor---   4.00m                                             
  [POOL_tdata_rmeta_1]  normal     ewi-aor---   4.00m                                             
  [POOL_tmeta]          normal     ewi-ao----   4.00m                                             
  [lvol0_pmspare]       normal     ewi-------   4.00m                                             
  lv_root               vg_virt008 -wi-ao----   6.71g                                             
  lv_swap               vg_virt008 -wi-ao---- 816.00m                    


VERIFIED with lvm2-2.02.100-6.el6.x86_64

Comment 8 errata-xmlrpc 2013-11-21 23:28:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html

