Bug 839796
Summary: | pvmove is inconsistent and gives misleading message for raid | |
---|---|---|---
Product: | Red Hat Enterprise Linux 6 | Reporter: | benscott
Component: | lvm2 | Assignee: | Jonathan Earl Brassow <jbrassow>
Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list>
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 6.5 | CC: | agk, cmarthal, dwysocha, heinzm, jbrassow, msnitzer, nperic, prajnoha, prockai, thornber, zkabelac
Target Milestone: | rc | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | lvm2-2.02.98-4.el6 | Doc Type: | Bug Fix
Story Points: | --- | |
Clone Of: | | |
Last Closed: | 2013-02-21 08:11:36 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 886216 | |

Doc Text:

'pvmove' has been disallowed from operating on RAID logical volumes due to incorrect handling of their sub-LVs. If it is necessary to move a RAID logical volume's components from one device to another, 'lvconvert --replace <old_pv> <vg>/<lv> <new_pv>' should be used.

Environment:

```
lvs --version
  LVM version:     2.02.96(2)-cvs (2012-03-06)
  Library version: 1.02.75-cvs (2012-03-06)
  Driver version:  4.22.0
```
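As a quick illustration of the 'lvconvert --replace' workaround named in the Doc Text above, here is a minimal sketch (not taken from the bug itself). It assumes a two-way RAID1 LV named vg/lv with image and metadata sub-LVs on /dev/sdb1, and a spare physical volume /dev/sdd1 that is already part of the volume group, matching the layout shown in the comments below:

```sh
# Show which PVs currently hold the RAID image (_rimage_*) and
# metadata (_rmeta_*) sub-LVs.
lvs -a -o +devices vg

# Rebuild the image that lives on /dev/sdb1 onto /dev/sdd1.
# This is the documented alternative to 'pvmove' for RAID LVs.
lvconvert --replace /dev/sdb1 vg/lv /dev/sdd1

# After resynchronization completes, the Devices column should list
# /dev/sdd1 in place of /dev/sdb1.
lvs -a -o +devices vg
```

Unlike 'pvmove', this route rebuilds the affected image on the new PV rather than inserting a pvmove mapping underneath the raid target, which is the failure mode described in the commit message quoted below.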
Comment 2
benscott
2012-07-12 21:59:38 UTC
The misleading message part has come up before. :)
https://bugzilla.redhat.com/show_bug.cgi?id=500899#c6

Well, from its point of view, it didn't move lvol0 at all - it moved the lvol0_* volumes! So the output you quote is perfectly logical! We'll sort out what the capabilities of these commands really are now and get them fixed.

Corey: chicken and egg :) The purpose of the bugzilla *is* to work out what the correct behaviour of the tools should be in all interactions between pvmove and the different types of raid devices, and then to implement that.

If you look at the device-mapper mapping tables for the result of the RAID LV after the move, you find it horribly disfigured. This should not be allowed. I am disallowing 'pvmove' of RAID LVs for now. The feature can be requested for RHEL 6.5+. For now, users wishing to move a particular leg of a RAID LV can use:
'lvconvert --replace old_pv vg/lv new_pv'

```
commit 383575525916d4cafb1c8396c95a40be539d1451
Author: Jonathan Brassow <jbrassow>
Date:   Tue Dec 4 17:47:47 2012 -0600

    pvmove/RAID: Disallow pvmove on RAID LVs until properly handled

    Attempting pvmove on RAID LVs replaces the kernel RAID target with
    a temporary pvmove target, ultimately destroying the RAID LV.
    pvmove must be prevented on RAID LVs for now.

    Use 'lvconvert --replace old_pv vg/lv new_pv' if you want to move
    an image of the RAID LV.
```

Results:

```
[root@bp-01 lvm2]# devices vg
  LV            Cpy%Sync Devices
  lv            100.00   lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]          /dev/sdb1(1)
  [lv_rimage_1]          /dev/sdc1(1)
  [lv_rmeta_0]           /dev/sdb1(0)
  [lv_rmeta_1]           /dev/sdc1(0)

[root@bp-01 lvm2]# pvmove --name lv /dev/sdb1 /dev/sdd1
  Skipping raid1 LV lv
  All data on source PV skipped. It contains locked, hidden or non-top level LVs only.
  No data to move for vg

[root@bp-01 lvm2]# pvmove /dev/sdb1 /dev/sdd1
  Skipping raid1 LV lv
  Skipping RAID sub-LV lv_rimage_0
  Skipping RAID sub-LV lv_rmeta_0
  Skipping RAID sub-LV lv_rimage_1
  Skipping RAID sub-LV lv_rmeta_1
  All data on source PV skipped. It contains locked, hidden or non-top level LVs only.
  No data to move for vg

[root@bp-01 lvm2]# devices vg
  LV            Cpy%Sync Devices
  lv            100.00   lv_rimage_0(0),lv_rimage_1(0)
  [lv_rimage_0]          /dev/sdb1(1)
  [lv_rimage_1]          /dev/sdc1(1)
  [lv_rmeta_0]           /dev/sdb1(0)
  [lv_rmeta_1]           /dev/sdc1(0)
```

Adding QA ack based on last comment. (#11)

Verified that pvmove is now disallowed on raid volumes.

```
2.6.32-348.el6.x86_64
lvm2-2.02.98-6.el6                       BUILT: Thu Dec 20 07:00:04 CST 2012
lvm2-libs-2.02.98-6.el6                  BUILT: Thu Dec 20 07:00:04 CST 2012
lvm2-cluster-2.02.98-6.el6               BUILT: Thu Dec 20 07:00:04 CST 2012
udev-147-2.43.el6                        BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-6.el6              BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-libs-1.02.77-6.el6         BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-event-1.02.77-6.el6        BUILT: Thu Dec 20 07:00:04 CST 2012
device-mapper-event-libs-1.02.77-6.el6   BUILT: Thu Dec 20 07:00:04 CST 2012
cmirror-2.02.98-6.el6                    BUILT: Thu Dec 20 07:00:04 CST 2012

[root@hayes-01 ~]# lvs -a -o +devices
  LV                        VG Attr      LSize Cpy%Sync Devices
  move_during_io               rwi-a-r-- 2.00g 38.28    move_during_io_rimage_0(0),move_during_io_rimage_1(0)
  [move_during_io_rimage_0]    Iwi-aor-- 2.00g          /dev/etherd/e1.1p9(1)
  [move_during_io_rimage_1]    Iwi-aor-- 2.00g          /dev/etherd/e1.1p8(1)
  [move_during_io_rmeta_0]     ewi-aor-- 4.00m          /dev/etherd/e1.1p9(0)
  [move_during_io_rmeta_1]     ewi-aor-- 4.00m          /dev/etherd/e1.1p8(0)

[root@hayes-01 ~]# pvmove -v /dev/etherd/e1.1p9 /dev/etherd/e1.1p10
    Finding volume group "raid_sanity"
    Archiving volume group "raid_sanity" metadata (seqno 3).
    Creating logical volume pvmove0
  Skipping raid1 LV move_during_io
  Skipping RAID sub-LV move_during_io_rimage_0
  Skipping RAID sub-LV move_during_io_rmeta_0
  Skipping RAID sub-LV move_during_io_rimage_1
  Skipping RAID sub-LV move_during_io_rmeta_1
  All data on source PV skipped. It contains locked, hidden or non-top level LVs only.
  No data to move for raid_sanity
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0501.html
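For anyone who wants to see the kind of damage described in the comments above ("the device-mapper mapping tables ... horribly disfigured"), the tables can be inspected directly with dmsetup. This is only a sketch, assuming the raid_sanity/move_during_io names from the verification output and the usual <vg>-<lv> device-mapper naming:

```sh
# A healthy RAID1 LV should map to a single "raid" target line;
# with the broken pvmove behaviour this bug describes, the raid
# target ended up replaced by temporary pvmove mappings instead.
dmsetup table raid_sanity-move_during_io

# The hidden _rimage_*/_rmeta_* sub-LVs should each be a plain
# "linear" target on the expected PV.
dmsetup table | grep move_during_io
```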