Bug 1191630 - LVM RAID - Add support for raid level takeover (part 1)
Summary: LVM RAID - Add support for raid level takeover (part 1)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
Docs Contact: Milan Navratil
URL:
Whiteboard:
Depends On:
Blocks: 834579 1189124 1346081 1366296
 
Reported: 2015-02-11 15:59 UTC by Heinz Mauelshagen
Modified: 2021-09-03 12:25 UTC (History)
CC List: 10 users

Fixed In Version: lvm2-2.02.163-1.el7
Doc Type: Technology Preview
Doc Text:
`LVM` RAID-level takeover is now available. RAID-level takeover, the ability to switch between RAID types, is now available as a Technology Preview. With RAID-level takeover, the user can decide, based on their changing hardware characteristics, what type of RAID configuration best suits their needs and make the change without having to deactivate the logical volume. For example, if a `striped` logical volume is created, it can later be converted to a RAID4 logical volume if an additional device is available. Starting with Red Hat Enterprise Linux 7.3, the following conversions are available as a Technology Preview:
* striped <-> RAID4
* linear <-> RAID1
* mirror <-> RAID1 (mirror is a legacy type, but still supported)
Clone Of:
Clones: 1366296 (view as bug list)
Environment:
Last Closed: 2016-11-04 04:09:02 UTC
Target Upstream Version:
Embargoed:
heinzm: needinfo-


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1445 0 normal SHIPPED_LIVE lvm2 bug fix and enhancement update 2016-11-03 13:46:41 UTC

Description Heinz Mauelshagen 2015-02-11 15:59:44 UTC
Description of problem:
lvm2 tools don't support conversions between RAID levels on logical volumes (this is one of the features covered by 'reshape'/'takeover' in MD-internal notion), e.g.:
'raid0' <-> 'raid5'
(see related https://bugzilla.redhat.com/show_bug.cgi?id=1191594)
'raid1' <-> 'raid5'
'raid5' <-> 'raid6'

In addition, conversions between 'striped' and 'raid0' shall be supported, so that
users can start out with existing 'striped' LVs and convert up to 'raid0/4/5/6'
and vice versa.
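
For illustration, the requested conversions would be driven through 'lvconvert' one level at a time, e.g. starting from an existing 'striped' LV (a sketch of the intended interface only; 'vg00/lv0' is a hypothetical LV name):

# lvconvert --type raid0 vg00/lv0
# lvconvert --type raid5 vg00/lv0
# lvconvert --type raid6 vg00/lv0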


Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. "lvconvert --type raid6 $LV" on a 'raid5' $LV

Actual results:
Error

Expected results:
Success

Additional info:
When switching RAID levels up, additional image and metadata internal LVs have to be allocated on distinct physical volumes and added to the mapping. The dm-raid device-mapper target needs to be enhanced to cope with the conversion (see separate kernel BZ).
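
One way to check the resulting allocation is to compare the internal LV layout before and after a takeover (hypothetical VG/LV names; this is the same 'lvs' listing used in the test transcripts below):

# lvs -a -o +devices vg00
# lvconvert --type raid6 vg00/lv0
# lvs -a -o +devices vg00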

Comment 1 Heinz Mauelshagen 2016-07-06 14:30:14 UTC
Code's done and in review/integration.

Comment 3 Jonathan Earl Brassow 2016-08-11 14:36:01 UTC
An exhaustive complement of conversions did not make the cut-off for this release. A partial set of conversions did; it includes:
 conversions between any of: striped, raid0, raid0_meta, raid4   
 conversions between any of: linear, raid1, mirror

The LVM team will implement tests to validate all of these conversions as part of their regression test suite. QA should ensure (along with any other testing) the consistency and coherency of the data through these conversions.

Take-overs (conversions from one RAID type to another) can be performed with the 'lvconvert' command, as follows:
# lvconvert --type <new_type> VG/lv_of_old_type
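
For example, using the supported pairs listed above (hypothetical LV names; the striped conversion assumes a free PV in the VG for the new parity image and its metadata):

# lvconvert --type raid4 VG/my_striped_lv
# lvconvert --type raid1 VG/my_linear_lv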

Comment 4 Alasdair Kergon 2016-08-11 14:40:53 UTC
raid5 and raid6 were not included at this point.

Comment 6 Corey Marthaler 2016-08-11 17:15:33 UTC
The first takeover attempt I tried doesn't appear to have been laid out properly. Is this actually what's expected when going from raid0 -> raid4?


[root@host-083 ~]# vgcreate test /dev/sd[abcdefgh]1
  Physical volume "/dev/sdb1" successfully created.
  Volume group "test" successfully created

[root@host-083 ~]# pvscan
  PV /dev/sda1   VG test            lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdc1   VG test            lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdd1   VG test            lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sde1   VG test            lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdf1   VG test            lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdg1   VG test            lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdh1   VG test            lvm2 [24.99 GiB / 24.99 GiB free]
  PV /dev/sdb1   VG test            lvm2 [24.99 GiB / 24.99 GiB free]

[root@host-083 ~]# lvcreate --type raid0 -L 100M -i 2 -n LV test
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents).
  Logical volume "LV" created.

[root@host-083 ~]# lvs -a -o +devices
  LV            VG    Attr       LSize    Log Cpy%Sync Devices
  LV            test  rwi-a-r--- 104.00m               LV_rimage_0(0),LV_rimage_1(0)
  [LV_rimage_0] test  iwi-aor---  52.00m               /dev/sda1(0)
  [LV_rimage_1] test  iwi-aor---  52.00m               /dev/sdc1(0)

[root@host-083 ~]# lvconvert --type raid4 test/LV
  Using default stripesize 64.00 KiB.
  Logical volume test/LV successfully converted.

[root@host-083 ~]# lvs -a -o +devices
  LV                    VG    Attr       LSize    Log Cpy%Sync Devices
  LV                    test  rwi-a-r--- 104.00m      100.00   LV_rimage_0(0),LV_rimage_1(0),LV_rimage_2(0)
  [LV_rimage_0]         test  iwi-aor---  52.00m               /dev/sda1(0)
  [LV_rimage_0_rmeta_0] test  ewi-aor---   4.00m               /dev/sda1(13)
  [LV_rimage_1]         test  iwi-aor---  52.00m               /dev/sdc1(0)
  [LV_rimage_1_rmeta_0] test  ewi-aor---   4.00m               /dev/sda1(14)
  [LV_rimage_2]         test  iwi-aor---  52.00m               /dev/sdd1(1)
  [LV_rmeta_0]          test  ewi-aor---   4.00m               /dev/sdd1(0)

[root@host-083 ~]# lvcreate -i 2 --type raid4 -n LV2 test -L 100M
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB (26 extents).
  Logical volume "LV2" created.

[root@host-083 ~]# lvs -a -o +devices
  LV                    VG    Attr       LSize    Log Cpy%Sync Devices
  LV                    test  rwi-a-r--- 104.00m      100.00   LV_rimage_0(0),LV_rimage_1(0),LV_rimage_2(0)   
  [LV_rimage_0]         test  iwi-aor---  52.00m               /dev/sda1(0)
  [LV_rimage_0_rmeta_0] test  ewi-aor---   4.00m               /dev/sda1(13)
  [LV_rimage_1]         test  iwi-aor---  52.00m               /dev/sdc1(0)
  [LV_rimage_1_rmeta_0] test  ewi-aor---   4.00m               /dev/sda1(14)  <- why is sda being used in an rimage1 device?                                
  [LV_rimage_2]         test  iwi-aor---  52.00m               /dev/sdd1(1)
  [LV_rmeta_0]          test  ewi-aor---   4.00m               /dev/sdd1(0)   <- why is sdd being used in an rimage0 device?                                
# Also, where is the rmeta volume for rimage2?

# The above converted "raid4" volume is laid out nothing like the initially created raid4 volume below:
  LV2                   test  rwi-a-r--- 104.00m      100.00   LV2_rimage_0(0),LV2_rimage_1(0),LV2_rimage_2(0)
  [LV2_rimage_0]        test  iwi-aor---  52.00m               /dev/sda1(16)
  [LV2_rimage_1]        test  iwi-aor---  52.00m               /dev/sdc1(14)
  [LV2_rimage_2]        test  iwi-aor---  52.00m               /dev/sdd1(15)
  [LV2_rmeta_0]         test  ewi-aor---   4.00m               /dev/sda1(15)
  [LV2_rmeta_1]         test  ewi-aor---   4.00m               /dev/sdc1(13)
  [LV2_rmeta_2]         test  ewi-aor---   4.00m               /dev/sdd1(14)


3.10.0-489.el7.x86_64

lvm2-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
lvm2-libs-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
lvm2-cluster-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-libs-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-event-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-event-libs-1.02.133-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016
sanlock-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.163-1.el7    BUILT: Wed Aug 10 06:53:21 CDT 2016

Comment 7 Alasdair Kergon 2016-08-11 22:36:34 UTC
The double suffix on the internal rmeta LVs is just cosmetic - fixed with 
  https://git.fedorahosted.org/cgit/lvm2.git/commit/
  https://www.redhat.com/archives/lvm-devel/2016-August/msg00063.html

Comment 8 Corey Marthaler 2016-08-12 16:32:36 UTC
Filed bug 1366737, bug 1366738, and bug 1366739 for the issues listed above in comment #6.

Comment 9 Corey Marthaler 2016-08-12 17:52:23 UTC
Additional takeover bugs:
bug 1366749, bug 1366752, bug 1366760

Comment 12 Corey Marthaler 2016-09-29 21:36:02 UTC
Marking verified for TechPreview in the latest rpms. Applicable raid and mirror regression tests passed when run on raid/mirror converted volumes.


3.10.0-510.el7.x86_64

lvm2-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-libs-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-cluster-2.02.166-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-libs-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-libs-1.02.135-1.el7    BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016

Comment 14 errata-xmlrpc 2016-11-04 04:09:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-1445.html

