Bug 1458006 - RAID TAKEOVER: lvm should choose direct path to desired raid level
Summary: RAID TAKEOVER: lvm should choose direct path to desired raid level
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1468600
 
Reported: 2017-06-01 18:14 UTC by Corey Marthaler
Modified: 2021-09-03 12:39 UTC (History)
CC: 7 users

Fixed In Version: 2.02.172
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1468600
Environment:
Last Closed: 2017-08-18 11:21:33 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2017-06-01 18:14:55 UTC
Description of problem:
Found this while testing the new expected behavior for bug 1439403. Originally, lvm was going to fail on any non-direct raid takeover path and present the user with the currently possible options. However, that appears to have changed in the latest rpms (171-3): lvm now attempts the first step towards the desired raid level. With that in mind, lvm should be choosing the most direct route.
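The "most direct route" argument amounts to shortest-path selection over the set of supported single-step takeover conversions. A minimal sketch of that idea, assuming a hypothetical (and deliberately incomplete) edge set that mirrors the transcripts below; this is illustrative only, not lvm2 code:

```python
from collections import deque

# Hypothetical subset of supported single-step takeover conversions,
# chosen to match the transcripts below; lvm's real table is larger.
TAKEOVER_STEPS = {
    "raid4":      ["raid5_n", "raid0_meta"],
    "raid5_n":    ["raid0_meta"],
    "raid0_meta": ["raid10"],
}

def shortest_takeover_path(src, dst):
    """Breadth-first search over the conversion graph: returns the
    fewest-step chain from src to dst, or None if unreachable."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in TAKEOVER_STEPS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

With this edge set, `shortest_takeover_path("raid4", "raid10")` yields the two-step route through raid0_meta shown in the first transcript, skipping the raid5_n interim step (and its sync wait) that lvm 171-3 picks.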


### raid4 -> 10 manually (2 steps)
[root@host-128 ~]# lvcreate  --type raid4 -i 3 -n LV -L 500M VG
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB(126 extents).
  Logical volume "LV" created.

[root@host-128 ~]# lvconvert --type raid0_meta VG/LV
  Using default stripesize 64.00 KiB.
Are you sure you want to convert raid4 LV VG/LV to raid0_meta type? [y/n]: y
  Logical volume VG/LV successfully converted.

# No need to wait for sync to complete either

[root@host-128 ~]# lvconvert --type raid10 VG/LV
  Using default stripesize 64.00 KiB.
Are you sure you want to convert raid0_meta LV VG/LV to raid10 type? [y/n]: y
  Logical volume VG/LV successfully converted.




### raid4 -> 10 letting lvm decide (3 steps and a wait for sync to complete)
[root@host-128 ~]# lvcreate  --type raid4 -i 3 -n LV -L 500M VG
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB(126 extents).
  Logical volume "LV" created.

# Now the expected behavior is to just keep running the same command over and over until it finally gets there? That seems like unprecedented lvm behavior.

[root@host-128 ~]# lvconvert --type raid10 VG/LV
  Using default stripesize 64.00 KiB.
  Replaced LV type raid10 (same as raid10_near) with possible type raid5_n.
  Repeat this command to convert to raid10 after an interim conversion has finished.
  Converting raid4 LV VG/LV to raid5_n.
Are you sure you want to convert raid4 LV VG/LV? [y/n]: y
  Logical volume VG/LV successfully converted.

[root@host-128 ~]# lvconvert --type raid10 VG/LV
  Using default stripesize 64.00 KiB.
  Replaced LV type raid10 (same as raid10_near) with possible type raid0_meta.
  Repeat this command to convert to raid10 after an interim conversion has finished.
Are you sure you want to convert raid5_n LV VG/LV to raid0_meta type? [y/n]: y
  Unable to convert VG/LV while it is not in-sync.

# Need to wait for sync and try again...
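The manual wait between retries can be sketched as a small polling helper. Here `get_sync_percent` is a hypothetical callback standing in for reading Cpy%Sync (e.g. from `lvs --noheadings -o copy_percent VG/LV`); this is an illustrative sketch, not lvm2 code:

```python
import time

def wait_until_in_sync(get_sync_percent, poll_seconds=5.0, timeout=3600.0):
    """Poll a status callback until the LV reports 100% in-sync.

    get_sync_percent: hypothetical callable returning Cpy%Sync as a float.
    Returns True once in-sync, False if the timeout expires first.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_sync_percent() >= 100.0:
            return True
        time.sleep(poll_seconds)
    return False
```

Only after this returns True is the next `lvconvert --type raid10` retry worth issuing, which is exactly the manual step the transcript complains about.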

[root@host-128 ~]# lvconvert --type raid10 VG/LV
  Using default stripesize 64.00 KiB.
  Replaced LV type raid10 (same as raid10_near) with possible type raid0_meta.
  Repeat this command to convert to raid10 after an interim conversion has finished.
Are you sure you want to convert raid5_n LV VG/LV to raid0_meta type? [y/n]: y
  Logical volume VG/LV successfully converted.

[root@host-128 ~]# lvconvert --type raid10 VG/LV
  Using default stripesize 64.00 KiB.
Are you sure you want to convert raid0_meta LV VG/LV to raid10 type? [y/n]: y
  Logical volume VG/LV successfully converted.



Version-Release number of selected component (if applicable):
3.10.0-666.el7.x86_64

lvm2-2.02.171-3.el7    BUILT: Wed May 31 08:36:29 CDT 2017
lvm2-libs-2.02.171-3.el7    BUILT: Wed May 31 08:36:29 CDT 2017
lvm2-cluster-2.02.171-3.el7    BUILT: Wed May 31 08:36:29 CDT 2017
device-mapper-1.02.140-3.el7    BUILT: Wed May 31 08:36:29 CDT 2017
device-mapper-libs-1.02.140-3.el7    BUILT: Wed May 31 08:36:29 CDT 2017
device-mapper-event-1.02.140-3.el7    BUILT: Wed May 31 08:36:29 CDT 2017
device-mapper-event-libs-1.02.140-3.el7    BUILT: Wed May 31 08:36:29 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017

Comment 2 Heinz Mauelshagen 2017-06-02 12:33:42 UTC
Upstream commit 3217e0cfeaed313332a0b69437ee87ca1f4b9e5e

Comment 3 Heinz Mauelshagen 2017-06-06 17:36:21 UTC
Remove superfluous raid5_n interim LV type from raid4 -> raid10 conversion:

# lvs -aoname,size,segtype,attr,copypercent,stripes nvm
  LV           LSize  Type   Attr       Cpy%Sync #Str
  r            36.00m raid4  rwi-a-r--- 100.00      4
  [r_rimage_0] 12.00m linear iwi-aor---             1
  [r_rimage_1] 12.00m linear iwi-aor---             1
  [r_rimage_2] 12.00m linear iwi-aor---             1
  [r_rimage_3] 12.00m linear iwi-aor---             1
  [r_rmeta_0]   4.00m linear ewi-aor---             1
  [r_rmeta_1]   4.00m linear ewi-aor---             1
  [r_rmeta_2]   4.00m linear ewi-aor---             1
  [r_rmeta_3]   4.00m linear ewi-aor---             1

# lvconvert --type raid10 nvm/r
  Using default stripesize 64.00 KiB.
  Replaced LV type raid10 (same as raid10_near) with possible type raid0_meta.
  Repeat this command to convert to raid10 after an interim conversion has finished.
Are you sure you want to convert raid4 LV nvm/r to raid0_meta type? [y/n]: y
  Logical volume nvm/r successfully converted.

# lvconvert --type raid10 nvm/r
  Using default stripesize 64.00 KiB.
Are you sure you want to convert raid0_meta LV nvm/r to raid10 type? [y/n]: y

# lvs -aoname,size,segtype,attr,copypercent,stripes nvm
  LV           LSize  Type   Attr       Cpy%Sync #Str
  r            36.00m raid10 rwi-a-r--- 100.00      6
  [r_rimage_0] 12.00m linear iwi-aor---             1
  [r_rimage_1] 12.00m linear iwi-aor---             1
  [r_rimage_2] 12.00m linear iwi-aor---             1
  [r_rimage_3] 12.00m linear iwi-aor---             1
  [r_rimage_4] 12.00m linear iwi-aor---             1
  [r_rimage_5] 12.00m linear iwi-aor---             1
  [r_rmeta_0]   4.00m linear ewi-aor---             1
  [r_rmeta_1]   4.00m linear ewi-aor---             1
  [r_rmeta_2]   4.00m linear ewi-aor---             1
  [r_rmeta_3]   4.00m linear ewi-aor---             1
  [r_rmeta_4]   4.00m linear ewi-aor---             1
  [r_rmeta_5]   4.00m linear ewi-aor---             1

