Red Hat Bugzilla – Bug 1191630
LVM RAID - Add support for raid level takeover (part 1)
Last modified: 2016-11-13 15:06:59 EST
Description of problem:
lvm2 tools don't support conversions between RAID levels on logical volumes (a feature covered by 'reshape'/takeover in MD-internal terms), e.g.:

  'raid0' <-> 'raid5'  (see related https://bugzilla.redhat.com/show_bug.cgi?id=1191594)
  'raid1' <-> 'raid5'
  'raid5' <-> 'raid6'

In addition, conversions 'striped' <-> 'raid0' shall be supported, to be able to start out with existing 'striped' LVs and convert them up to 'raid0/4/5/6' and vice versa.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Run "lvconvert --type raid6 $LV" on a 'raid5' $LV

Actual results:
Error

Expected results:
Success

Additional info:
When switching RAID levels up, additional image and metadata internal LVs have to be allocated on distinct physical volumes and added to the mapping. The dm-raid device-mapper target needs to be enhanced to cope with the conversion (see separate kernel bz).
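A minimal reproduction sketch of the failing step, assuming a VG named 'vg00' with enough free PVs (all names here are illustrative, not from the report):

# lvcreate --type raid5 -i 3 -L 1G -n lv1 vg00
# lvconvert --type raid6 vg00/lv1    <- currently errors out; expected to succeed once takeover support lands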
Code's done and in review/integration.
An exhaustive complement of conversions did not make the cut-off for this release. A partial set did; it includes:

  conversions between any of: striped, raid0, raid0_meta, raid4
  conversions between any of: linear, raid1, mirror

The LVM team will implement tests to validate all of these conversions as part of their regression test suite. QA will please ensure (along with any other testing) the consistency and coherency of the data through these conversions.

Take-overs (conversions from one RAID type to another) can be performed with the 'lvconvert' command, as follows:

# lvconvert --type <new_type> VG/lv_of_old_type
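For example, within the first supported set above a plain striped LV could hypothetically be walked up to raid4 (VG/LV names are illustrative):

# lvcreate --type striped -i 3 -L 1G -n mylv vg00    (start from an ordinary striped LV)
# lvconvert --type raid0_meta vg00/mylv              (same striping, rmeta subvolumes added)
# lvconvert --type raid4 vg00/mylv                   (adds a dedicated parity device)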
raid5 and raid6 were not included at this point.
The first takeover attempt I tried doesn't appear to have been laid out properly. Is this actually what's expected when going from raid0 -> raid4?

[root@host-083 ~]# vgcreate test /dev/sd[abcdefgh]1
  Physical volume "/dev/sdb1" successfully created.
  Volume group "test" successfully created

[root@host-083 ~]# pvscan
  PV /dev/sda1   VG test   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdc1   VG test   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdd1   VG test   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sde1   VG test   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdf1   VG test   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdg1   VG test   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdh1   VG test   lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdb1   VG test   lvm2 [24.99 GiB / 24.99 GiB free]

[root@host-083 ~]# lvcreate --type raid0 -L 100M -i 2 -n LV test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents).
  Logical volume "LV" created.

[root@host-083 ~]# lvs -a -o +devices
  LV            VG   Attr       LSize   Log Cpy%Sync Devices
  LV            test rwi-a-r--- 104.00m              LV_rimage_0(0),LV_rimage_1(0)
  [LV_rimage_0] test iwi-aor---  52.00m              /dev/sda1(0)
  [LV_rimage_1] test iwi-aor---  52.00m              /dev/sdc1(0)

[root@host-083 ~]# lvconvert --type raid4 test/LV
  Using default stripesize 64.00 KiB.
  Logical volume test/LV successfully converted.

[root@host-083 ~]# lvs -a -o +devices
  LV                    VG   Attr       LSize   Log Cpy%Sync Devices
  LV                    test rwi-a-r--- 104.00m     100.00   LV_rimage_0(0),LV_rimage_1(0),LV_rimage_2(0)
  [LV_rimage_0]         test iwi-aor---  52.00m              /dev/sda1(0)
  [LV_rimage_0_rmeta_0] test ewi-aor---   4.00m              /dev/sda1(13)
  [LV_rimage_1]         test iwi-aor---  52.00m              /dev/sdc1(0)
  [LV_rimage_1_rmeta_0] test ewi-aor---   4.00m              /dev/sda1(14)
  [LV_rimage_2]         test iwi-aor---  52.00m              /dev/sdd1(1)
  [LV_rmeta_0]          test ewi-aor---   4.00m              /dev/sdd1(0)

[root@host-083 ~]# lvcreate -i 2 --type raid4 -n LV2 test -L 100M
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents).
  Logical volume "LV2" created.

[root@host-083 ~]# lvs -a -o +devices
  LV                    VG   Attr       LSize   Log Cpy%Sync Devices
  LV                    test rwi-a-r--- 104.00m     100.00   LV_rimage_0(0),LV_rimage_1(0),LV_rimage_2(0)
  [LV_rimage_0]         test iwi-aor---  52.00m              /dev/sda1(0)
  [LV_rimage_0_rmeta_0] test ewi-aor---   4.00m              /dev/sda1(13)
  [LV_rimage_1]         test iwi-aor---  52.00m              /dev/sdc1(0)
  [LV_rimage_1_rmeta_0] test ewi-aor---   4.00m              /dev/sda1(14)  <- why is sda being used in an rimage1 device?
  [LV_rimage_2]         test iwi-aor---  52.00m              /dev/sdd1(1)
  [LV_rmeta_0]          test ewi-aor---   4.00m              /dev/sdd1(0)   <- why is sdd being used in an rimage0 device?

# Also, where is the rmeta volume for rimage2?
# The above converted "raid4" volume looks nothing like, and is laid out unlike, the initially created raid4 volume below:

  LV2                   test rwi-a-r--- 104.00m     100.00   LV2_rimage_0(0),LV2_rimage_1(0),LV2_rimage_2(0)
  [LV2_rimage_0]        test iwi-aor---  52.00m              /dev/sda1(16)
  [LV2_rimage_1]        test iwi-aor---  52.00m              /dev/sdc1(14)
  [LV2_rimage_2]        test iwi-aor---  52.00m              /dev/sdd1(15)
  [LV2_rmeta_0]         test ewi-aor---   4.00m              /dev/sda1(15)
  [LV2_rmeta_1]         test ewi-aor---   4.00m              /dev/sdc1(13)
  [LV2_rmeta_2]         test ewi-aor---   4.00m              /dev/sdd1(14)

3.10.0-489.el7.x86_64

lvm2-2.02.163-1.el7                          BUILT: Wed Aug 10 06:53:21 CDT 2016
lvm2-libs-2.02.163-1.el7                     BUILT: Wed Aug 10 06:53:21 CDT 2016
lvm2-cluster-2.02.163-1.el7                  BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-1.02.133-1.el7                 BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-libs-1.02.133-1.el7            BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-event-1.02.133-1.el7           BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-event-libs-1.02.133-1.el7      BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.163-1.el7                       BUILT: Wed Aug 10 06:53:21 CDT 2016
sanlock-3.4.0-1.el7                          BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7                      BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.163-1.el7                    BUILT: Wed Aug 10 06:53:21 CDT 2016
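For anyone triaging the layout questions above, comparing the device-mapper tables of the converted and the directly created volumes is one way to see how they differ (a diagnostic sketch; the dm names follow the usual VG-LV naming for the volumes in this transcript):

# dmsetup table test-LV     (mapping of the raid0 -> raid4 converted volume)
# dmsetup table test-LV2    (mapping of the natively created raid4 volume)
# dmsetup status test-LV    (per-device health and sync state)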
The double suffix on the internal rmeta LVs is just cosmetic - fixed with:
https://git.fedorahosted.org/cgit/lvm2.git/commit/
https://www.redhat.com/archives/lvm-devel/2016-August/msg00063.html
Filed bug 1366737, bug 1366738, and bug 1366739 for the issues listed above in comment #6.
Additional take over bugs: bug 1366749, bug 1366752, bug 1366760
Marking verified for TechPreview in the latest rpms. Applicable raid and mirror regression tests passed when run on raid/mirror converted volumes.

3.10.0-510.el7.x86_64

lvm2-2.02.166-1.el7                          BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-libs-2.02.166-1.el7                     BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-cluster-2.02.166-1.el7                  BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-1.02.135-1.el7                 BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-libs-1.02.135-1.el7            BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-1.02.135-1.el7           BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-libs-1.02.135-1.el7      BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
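A minimal data-coherency spot check of the kind described above, using only conversions from the supported sets (VG, LV, and mount-point names are illustrative):

# lvcreate --type mirror -m 1 -L 500M -n lv vg00
# mkfs.ext4 /dev/vg00/lv
# mount /dev/vg00/lv /mnt
# dd if=/dev/urandom of=/mnt/data bs=1M count=100
# md5sum /mnt/data > /tmp/data.md5
# umount /mnt
# lvconvert --type raid1 vg00/lv      (mirror -> raid1, within the supported set)
# mount /dev/vg00/lv /mnt
# md5sum -c /tmp/data.md5             (must report OK if data survived the conversion)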
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-1445.html