| Summary: | request for clarification: will raid scrubbing be supported on stacked thin pool volumes | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Jonathan Earl Brassow <jbrassow> |
| Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 6.5 | CC: | agk, dwysocha, heinzm, jbrassow, msnitzer, nperic, prajnoha, prockai, thornber, tlavigne, zkabelac |
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | x86_64 | ||
| OS: | Linux | ||
| Whiteboard: | |||
| Fixed In Version: | lvm2-2.02.100-6.el6 | Doc Type: | Bug Fix |
| Doc Text: | No Doc Text required. | Story Points: | --- |
| Clone Of: | Environment: | ||
| Last Closed: | 2013-11-21 23:28:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
|
Description Corey Marthaler 2013-09-11 22:39:43 UTC

Yes, it will. I'll try to clean up that error message, but you should be addressing the underlying sub-LVs directly in this case, like this:

```
lvchange --raidsyncaction repair snapper_thinp/POOL_tdata
```

or

```
lvchange --raidsyncaction repair snapper_thinp/POOL_tmeta
```

The command /could/ determine which sub-LV to perform the action on if only one of them is a RAID LV, but I'll save that kind of intelligence for a later release, perhaps. Better error message committed upstream:
commit d97583cfd395e5e31888558361ae9467cea60260
Author: Jonathan Brassow <jbrassow>
Date: Mon Oct 14 15:14:16 2013 -0500
RAID: Better error message when attempting scrubbing op on thinpool LV
Component LVs of a thinpool can be RAID LVs. Users who attempt a
scrubbing operation directly on a thinpool will be prompted to
specify the sub-LV they wish the operation to be performed on. If
neither of the sub-LVs are RAID, then a message telling them that
the operation can only be performed on a RAID LV will be given.
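The commit above has the command report an error and leave it to the user to pick the right sub-LV. A small wrapper can make that choice itself by inspecting each sub-LV's segment type and scrubbing only the RAID ones, along the lines the earlier comment suggests. This is an illustrative sketch, not part of the fix: the helper name `scrub_pool_sublvs` is hypothetical, and it assumes an LVM2 with `--syncaction` support and `lvs -o segtype` available.

```shell
#!/bin/bash
# Sketch (not from the bug report): scrub only the RAID sub-LVs of a
# thin pool. scrub_pool_sublvs is a hypothetical helper name.
scrub_pool_sublvs() {
    local vg=$1 pool=$2 lv segtype
    for suffix in _tdata _tmeta; do
        lv="$vg/${pool}${suffix}"
        # lvs -o segtype prints e.g. "raid1" for a RAID sub-LV
        segtype=$(lvs --noheadings -o segtype "$lv" 2>/dev/null | tr -d ' ')
        case "$segtype" in
            raid*) lvchange --syncaction check "$lv" ;;
            *)     echo "skipping $lv (segtype: ${segtype:-unknown})" ;;
        esac
    done
}
```

Usage would be e.g. `scrub_pool_sublvs normal POOL`, which checks `normal/POOL_tdata` and `normal/POOL_tmeta` and issues the scrub only on whichever of them are RAID.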
```
[root@bp-02 lvm2]# lvchange --syncaction check vg/lv
  Thinpool data or metadata volume must be specified. (e.g. "vg/lv_tdata")
[root@bp-02 lvm2]# lvchange --syncaction check vg/lv_tdata
[root@bp-02 lvm2]# lvs -a -o +raid_sync_action vg
  LV                  VG Attr       LSize Pool Origin Data%  Move Log Cpy%Sync Convert SyncAction
  lv                  vg twi-a-tz-- 1.00g             0.00
  [lv_tdata]          vg rwi-aor--- 1.00g                             37.50            check
  [lv_tdata_rimage_0] vg iwi-aor--- 1.00g
  [lv_tdata_rimage_1] vg iwi-aor--- 1.00g
  [lv_tdata_rmeta_0]  vg ewi-aor--- 4.00m
  [lv_tdata_rmeta_1]  vg ewi-aor--- 4.00m
  [lv_tmeta]          vg ewi-ao---- 4.00m
  [lvol0_pmspare]     vg ewi------- 4.00m
```

```
[root@virt-008 ~]# lvs -a
  LV                    VG         Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  POOL                  normal     twi-a-tz--   3.00g             0.00
  [POOL_tdata]          normal     rwi-aor---   3.00g                            100.00
  [POOL_tdata_rimage_0] normal     iwi-aor---   3.00g
  [POOL_tdata_rimage_1] normal     iwi-aor---   3.00g
  [POOL_tdata_rmeta_0]  normal     ewi-aor---   4.00m
  [POOL_tdata_rmeta_1]  normal     ewi-aor---   4.00m
  [POOL_tmeta]          normal     ewi-ao----   4.00m
  [lvol0_pmspare]       normal     ewi-------   4.00m
  lv_root               vg_virt008 -wi-ao----   6.71g
  lv_swap               vg_virt008 -wi-ao---- 816.00m
[root@virt-008 ~]# lvchange --syncaction repair normal/POOL
  Thinpool data or metadata volume must be specified. (e.g. "normal/POOL_tdata")
[root@virt-008 ~]# lvchange --syncaction repair normal/POOL_tdata
[root@virt-008 ~]# lvs -a
  LV                    VG         Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  POOL                  normal     twi-a-tz--   3.00g             0.00
  [POOL_tdata]          normal     rwi-aor---   3.00g                             25.65
  [POOL_tdata_rimage_0] normal     iwi-aor---   3.00g
  [POOL_tdata_rimage_1] normal     iwi-aor---   3.00g
  [POOL_tdata_rmeta_0]  normal     ewi-aor---   4.00m
  [POOL_tdata_rmeta_1]  normal     ewi-aor---   4.00m
  [POOL_tmeta]          normal     ewi-ao----   4.00m
  [lvol0_pmspare]       normal     ewi-------   4.00m
  lv_root               vg_virt008 -wi-ao----   6.71g
  lv_swap               vg_virt008 -wi-ao---- 816.00m
```

VERIFIED with lvm2-2.02.100-6.el6.x86_64

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html