Bug 1447809

Summary: RAID TAKEOVER: conversion attempts *to* linear should execute interim possibilities until linear is achieved
Product: Red Hat Enterprise Linux 7
Reporter: Corey Marthaler <cmarthal>
Component: lvm2
Assignee: Heinz Mauelshagen <heinzm>
lvm2 sub component: Mirroring and RAID
QA Contact: cluster-qe <cluster-qe>
Status: CLOSED ERRATA
Severity: low
Priority: unspecified
CC: agk, cluster-qe, heinzm, jbrassow, mcsontos, msnitzer, prajnoha, rhandlin, zkabelac
Version: 7.4
Target Milestone: rc
Hardware: x86_64
OS: Linux
Fixed In Version: lvm2-2.02.180-6.el7
Last Closed: 2018-10-30 11:02:16 UTC
Type: Bug

Description Corey Marthaler 2017-05-03 23:10:53 UTC
Description of problem:
This is the reverse version of bug 1439925.

[root@host-076 ~]# lvcreate -m 3 -L 100M --type mirror centipede2
  Logical volume "lvol0" created.
[root@host-076 ~]# lvcreate -m 3 -L 100M --type raid1 centipede2
  Logical volume "lvol1" created.

[root@host-076 ~]# lvconvert --type linear centipede2/lvol0
  Logical volume centipede2/lvol0 converted.
[root@host-076 ~]# lvconvert --type linear centipede2/lvol1
Are you sure you want to convert raid1 LV centipede2/lvol1 to type linear losing all resilience? [y/n]: y
  Logical volume centipede2/lvol1 successfully converted.


# raid5
[root@host-076 ~]# lvcreate -i 2 -L 100M --type raid5 centipede2
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB(26 extents).
  Logical volume "lvol2" created.
[root@host-076 ~]# lvconvert --type linear centipede2/lvol2
  --mirrors/-m is not compatible with raid5.

# This is the kind of listing I would expect above, instead of the unhelpful "--mirrors" message (see the sketch after this listing).
[root@host-076 ~]# lvconvert --type raid10 centipede2/lvol2
  Using default stripesize 64.00 KiB.
  Unable to convert LV centipede2/lvol2 from raid5 to raid10.
  Converting centipede2/lvol2 from raid5 (same as raid5_ls) is directly possible to the following layouts:
    raid5_n
    raid5_la
    raid5_ra
    raid5_rs
    raid6_ls_6
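
# A rough sketch of the behaviour I'd expect: repeating the same command should
# walk the LV through interim layouts until linear is reached. The interim types
# below are illustrative only; the chains actually implemented are shown in the
# verification comments further down.
#   lvconvert --type linear centipede2/lvol2    # e.g. raid5   -> raid5_n (interim)
#   lvconvert --type linear centipede2/lvol2    # e.g. raid5_n -> raid1   (interim)
#   lvconvert --type linear centipede2/lvol2    # raid1 -> linear (done)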


# raid5_n
[root@host-076 ~]# lvcreate -i 2 -L 100M --type raid5_n centipede2
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB(26 extents).
  Logical volume "lvol3" created.
[root@host-076 ~]# lvconvert --type linear centipede2/lvol3
  --mirrors/-m is not compatible with raid5_n.

# raid6
[root@host-076 ~]# lvcreate -i 3 -L 100M --type raid6 centipede2
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "lvol4" created.
[root@host-076 ~]# lvconvert --type linear centipede2/lvol4
  --mirrors/-m is not compatible with raid6.

# raid10
[root@host-076 ~]# lvcreate -i 3 -L 100M --type raid10 centipede2
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "lvol5" created.
[root@host-076 ~]# lvconvert --type linear centipede2/lvol5
  --mirrors/-m cannot be changed with raid10.


Version-Release number of selected component (if applicable):
3.10.0-660.el7.x86_64

lvm2-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
lvm2-libs-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
lvm2-cluster-2.02.171-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-libs-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-event-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-event-libs-1.02.140-1.el7    BUILT: Wed May  3 07:05:13 CDT 2017
device-mapper-persistent-data-0.7.0-0.1.rc6.el7    BUILT: Mon Mar 27 10:15:46 CDT 2017

Comment 3 Heinz Mauelshagen 2018-06-05 14:27:35 UTC
lvm2 upstream commit bd7cdd0b09ba123b064937fddde08daacbed7dab

Comment 9 Corey Marthaler 2018-08-13 19:48:53 UTC
All the segtypes listed in comment #0 appeared to have been properly converted to linear, *with the exception* of raid6 and raid10. 

3.10.0-931.el7.x86_64
lvm2-2.02.180-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
lvm2-libs-2.02.180-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
lvm2-cluster-2.02.180-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
device-mapper-1.02.149-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
device-mapper-libs-1.02.149-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
device-mapper-event-1.02.149-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
device-mapper-event-libs-1.02.149-2.el7    BUILT: Wed Aug  1 11:22:48 CDT 2018
device-mapper-persistent-data-0.7.3-3.el7    BUILT: Tue Nov 14 05:07:18 CST 2017





### 1. mirror and raid1 ###  (PASSES)
[root@host-093 ~]# lvcreate -m 3 -L 100M --type mirror VG
  Logical volume "lvol0" created.
[root@host-093 ~]# lvcreate -m 3 -L 100M --type raid1 VG
  Logical volume "lvol1" created.
[root@host-093 ~]# lvs -a -o +devices,segtype
  LV               VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log          Cpy%Sync Convert Devices                                                                 Type     
  lvol0            VG            mwi-a-m--- 100.00m                                  [lvol0_mlog] 100.00           lvol0_mimage_0(0),lvol0_mimage_1(0),lvol0_mimage_2(0),lvol0_mimage_3(0) mirror   
  [lvol0_mimage_0] VG            iwi-aom--- 100.00m                                                                /dev/sda1(0)                                                            linear   
  [lvol0_mimage_1] VG            iwi-aom--- 100.00m                                                                /dev/sdb1(0)                                                            linear   
  [lvol0_mimage_2] VG            iwi-aom--- 100.00m                                                                /dev/sdc1(0)                                                            linear   
  [lvol0_mimage_3] VG            iwi-aom--- 100.00m                                                                /dev/sdd1(0)                                                            linear   
  [lvol0_mlog]     VG            lwi-aom---   4.00m                                                                /dev/sdh1(0)                                                            linear   
  lvol1            VG            rwi-a-r--- 100.00m                                               100.00           lvol1_rimage_0(0),lvol1_rimage_1(0),lvol1_rimage_2(0),lvol1_rimage_3(0) raid1    
  [lvol1_rimage_0] VG            iwi-aor--- 100.00m                                                                /dev/sda1(26)                                                           linear   
  [lvol1_rimage_1] VG            iwi-aor--- 100.00m                                                                /dev/sdb1(26)                                                           linear   
  [lvol1_rimage_2] VG            iwi-aor--- 100.00m                                                                /dev/sdc1(26)                                                           linear   
  [lvol1_rimage_3] VG            iwi-aor--- 100.00m                                                                /dev/sdd1(26)                                                           linear   
  [lvol1_rmeta_0]  VG            ewi-aor---   4.00m                                                                /dev/sda1(25)                                                           linear   
  [lvol1_rmeta_1]  VG            ewi-aor---   4.00m                                                                /dev/sdb1(25)                                                           linear   
  [lvol1_rmeta_2]  VG            ewi-aor---   4.00m                                                                /dev/sdc1(25)                                                           linear   
  [lvol1_rmeta_3]  VG            ewi-aor---   4.00m                                                                /dev/sdd1(25)                                                           linear   

[root@host-093 ~]# lvconvert --yes --type linear VG/lvol0
  Logical volume VG/lvol0 converted.
[root@host-093 ~]# lvconvert --yes --type linear VG/lvol1
  Logical volume VG/lvol1 successfully converted.
[root@host-093 ~]# lvs -a -o +devices,segtype
  LV              VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices         Type     
  lvol0           VG            -wi-a----- 100.00m                                                       /dev/sda1(0)    linear   
  lvol1           VG            -wi-a----- 100.00m                                                       /dev/sda1(26)   linear   



### 2. raid5 ###  (PASSES)
[root@host-093 ~]# lvcreate -i 2 -L 100M --type raid5 VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB(26 extents).
  Logical volume "lvol2" created.
[root@host-093 ~]# lvs -a -o +devices,segtype
  LV               VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                               Type     
  lvol2            VG            rwi-a-r--- 104.00m                                      100.00           lvol2_rimage_0(0),lvol2_rimage_1(0),lvol2_rimage_2(0) raid5    
  [lvol2_rimage_0] VG            iwi-aor---  52.00m                                                       /dev/sda1(52)                                         linear   
  [lvol2_rimage_1] VG            iwi-aor---  52.00m                                                       /dev/sdb1(1)                                          linear   
  [lvol2_rimage_2] VG            iwi-aor---  52.00m                                                       /dev/sdc1(1)                                          linear   
  [lvol2_rmeta_0]  VG            ewi-aor---   4.00m                                                       /dev/sda1(51)                                         linear   
  [lvol2_rmeta_1]  VG            ewi-aor---   4.00m                                                       /dev/sdb1(0)                                          linear   
  [lvol2_rmeta_2]  VG            ewi-aor---   4.00m                                                       /dev/sdc1(0)                                          linear   
[root@host-093 ~]# lvconvert --yes --type linear VG/lvol2
  Replaced LV type linear with possible type raid5_n.
  Repeat this command to convert to linear after an interim conversion has finished.
  Converting raid5 (same as raid5_ls) LV VG/lvol2 to raid5_n.
  Logical volume VG/lvol2 successfully converted.

# Note --force required here
[root@host-093 ~]# lvconvert --yes --force --type linear VG/lvol2
  Converting raid5_n LV VG/lvol2 to 2 stripes first.
  Replaced LV type linear with possible type raid5_n.
  Repeat this command to convert to linear after an interim conversion has finished.
  WARNING: Removing stripes from active logical volume VG/lvol2 will shrink it from 104.00 MiB to 52.00 MiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  If that leaves the logical volume larger than 52 extents due to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 VG/lvol2"
  Logical volume VG/lvol2 successfully converted.
[root@host-093 ~]# lvs -a -o +devices,segtype
  LV               VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                               Type     
  lvol2            VG            rwi-a-r-R- 104.00m                                      100.00           lvol2_rimage_0(0),lvol2_rimage_1(0),lvol2_rimage_2(0) raid5_n  
  [lvol2_rimage_0] VG            iwi-aor---  56.00m                                                       /dev/sda1(53)                                         linear   
  [lvol2_rimage_0] VG            iwi-aor---  56.00m                                                       /dev/sda1(52)                                         linear   
  [lvol2_rimage_1] VG            iwi-aor---  56.00m                                                       /dev/sdb1(2)                                          linear   
  [lvol2_rimage_1] VG            iwi-aor---  56.00m                                                       /dev/sdb1(1)                                          linear   
  [lvol2_rimage_2] VG            Iwi-aor-R-  56.00m                                                       /dev/sdc1(2)                                          linear   
  [lvol2_rimage_2] VG            Iwi-aor-R-  56.00m                                                       /dev/sdc1(1)                                          linear   
  [lvol2_rmeta_0]  VG            ewi-aor---   4.00m                                                       /dev/sda1(51)                                         linear   
  [lvol2_rmeta_1]  VG            ewi-aor---   4.00m                                                       /dev/sdb1(0)                                          linear   
  [lvol2_rmeta_2]  VG            ewi-aor-R-   4.00m                                                       /dev/sdc1(0)                                          linear   
[root@host-093 ~]# lvconvert --stripes 1 VG/lvol2
  Logical volume VG/lvol2 successfully converted.
[root@host-093 ~]# lvs -a -o +devices,segtype
  LV               VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                             Type     
  lvol0            VG            -wi-a----- 100.00m                                                       /dev/sda1(0)                        linear   
  lvol1            VG            -wi-a----- 100.00m                                                       /dev/sda1(26)                       linear   
  lvol2            VG            rwi-a-r---  52.00m                                      100.00           lvol2_rimage_0(0),lvol2_rimage_1(0) raid5_n  
  [lvol2_rimage_0] VG            iwi-aor---  56.00m                                                       /dev/sda1(53)                       linear   
  [lvol2_rimage_0] VG            iwi-aor---  56.00m                                                       /dev/sda1(52)                       linear   
  [lvol2_rimage_1] VG            iwi-aor---  56.00m                                                       /dev/sdb1(2)                        linear   
  [lvol2_rimage_1] VG            iwi-aor---  56.00m                                                       /dev/sdb1(1)                        linear   
  [lvol2_rmeta_0]  VG            ewi-aor---   4.00m                                                       /dev/sda1(51)                       linear   
  [lvol2_rmeta_1]  VG            ewi-aor---   4.00m                                                       /dev/sdb1(0)                        linear   
[root@host-093 ~]# lvconvert --yes --force --type linear VG/lvol2
  Replaced LV type linear with possible type raid1.
  Repeat this command to convert to linear after an interim conversion has finished.
  Logical volume VG/lvol2 successfully converted.
[root@host-093 ~]# lvconvert --yes --force --type linear VG/lvol2
  Logical volume VG/lvol2 successfully converted.
[root@host-093 ~]# lvs -a -o +devices,segtype
  LV              VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices         Type     
  lvol2           VG            -wi-a-----  52.00m                                                       /dev/sda1(54)   linear   
  lvol2           VG            -wi-a-----  52.00m                                                       /dev/sda1(52)   linear   



### 3. raid5_n ###  (PASSES)
[root@host-093 ~]# lvcreate -i 2 -L 100M --type raid5_n VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 104.00 MiB(26 extents).
  Logical volume "lvol0" created.
[root@host-093 ~]# lvconvert --yes --force --type linear VG/lvol0
  Converting raid5_n LV VG/lvol0 to 2 stripes first.
  Replaced LV type linear with possible type raid5_n.
  Repeat this command to convert to linear after an interim conversion has finished.
  WARNING: Removing stripes from active logical volume VG/lvol0 will shrink it from 104.00 MiB to 52.00 MiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  If that leaves the logical volume larger than 52 extents due to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 VG/lvol0"
  Logical volume VG/lvol0 successfully converted.
[root@host-093 ~]# lvconvert --yes --force --type linear VG/lvol0
  Converting raid5_n LV VG/lvol0 to 2 stripes first.
  Replaced LV type linear with possible type raid5_n.
  Repeat this command to convert to linear after an interim conversion has finished.
  Logical volume VG/lvol0 successfully converted.
[root@host-093 ~]# lvconvert --yes --force --type linear VG/lvol0
  Replaced LV type linear with possible type raid1.
  Repeat this command to convert to linear after an interim conversion has finished.
  Logical volume VG/lvol0 successfully converted.
[root@host-093 ~]# lvconvert --yes --force --type linear VG/lvol0
  Logical volume VG/lvol0 successfully converted.



### 4. raid6 ###  (FAILS)
[root@host-093 ~]# lvcreate -i 3 -L 100M --type raid6 VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "lvol0" created.
[root@host-093 ~]# lvconvert --type linear VG/lvol0
  Replaced LV type linear with possible type raid6_n_6.
  Repeat this command to convert to linear after an interim conversion has finished.
  Converting raid6 (same as raid6_zr) LV VG/lvol0 to raid6_n_6.
Are you sure you want to convert raid6 LV VG/lvol0? [y/n]: ^C  Interrupted...
  Logical volume VG/lvol0 NOT converted.
  Reshape request failed on LV VG/lvol0.
[root@host-093 ~]# lvconvert --yes --force --type linear VG/lvol0
  Replaced LV type linear with possible type raid6_n_6.
  Repeat this command to convert to linear after an interim conversion has finished.
  Converting raid6 (same as raid6_zr) LV VG/lvol0 to raid6_n_6.
  Logical volume VG/lvol0 successfully converted.
[root@host-093 ~]# lvconvert --yes --force --type linear VG/lvol0
  Unable to convert LV VG/lvol0 from raid6_n_6 to linear.
  Converting VG/lvol0 from raid6_n_6 is directly possible to the following layouts:
    raid0
    raid0_meta
    striped
    raid4
    raid5_n
    raid6_nc
    raid6_nr
    raid6_zr
    raid6_la_6
    raid6_ls_6
    raid6_ra_6
    raid6_rs_6


### 5. raid10 ###  (FAILS)
[root@host-093 ~]# lvcreate -i 3 -L 100M --type raid10 VG
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "lvol0" created.
[root@host-093 ~]# lvconvert --type linear VG/lvol0
  Unable to convert LV VG/lvol0 from raid10 to linear.
  Converting VG/lvol0 from raid10 (same as raid10_near) is directly possible to the following layouts:
    raid0
    raid0_meta
    striped

Comment 11 Corey Marthaler 2018-08-17 16:23:38 UTC
Bug 1618806 may block the verification of this issue.

Comment 12 Heinz Mauelshagen 2018-08-21 15:39:12 UTC
(In reply to Corey Marthaler from comment #11)
> Bug 1618806 may block the verification of this issue.

It won't: as Corey says in
https://bugzilla.redhat.com/show_bug.cgi?id=1618806#c4,
lvm warns about data loss caused by shrinking the LV when
converting from raid5 to linear (as with removing stripes from raid4/5/6).

Comment 13 Heinz Mauelshagen 2018-08-22 15:14:14 UTC
lvm2 upstream commit e83c4f07ca4a84808178d5d22cba655e5e370cd8

Comment 14 Corey Marthaler 2018-08-27 21:09:59 UTC
Both raid6 and raid10 are now capable of eventually being converted to linear; however, the process is a bit convoluted. Marking verified in the latest rpms.
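
For reference, the chains observed in the transcripts below (repeating the same
"lvconvert --type linear" command, plus --force/--stripes where stripes are removed) are:
  raid6:  raid6 -> raid6_n_6 -> raid5_n -> raid5_n reshaped to 2 stripes -> raid1 -> linear
  raid10: raid10 -> raid0_meta -> raid5_n -> image removal via "--force --stripes 1" -> raid1 -> linear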


## raid6 -> linear
[root@hayes-01 ~]# lvcreate -i 3 -L 100M --type raid6 centipede2
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "lvol0" created.

# conversion to raid6_n_6
[root@hayes-01 ~]# lvconvert --yes --force --type linear centipede2/lvol0
  Replaced LV type linear with possible type raid6_n_6.
  Repeat this command to convert to linear after an interim conversion has finished.
  Converting raid6 (same as raid6_zr) LV centipede2/lvol0 to raid6_n_6.
  Logical volume centipede2/lvol0 successfully converted.

# first raid5_n conversion
[root@hayes-01 ~]# lvconvert --yes --force --type linear centipede2/lvol0
  Replaced LV type linear with possible type raid5_n.
  Repeat this command to convert to linear after an interim conversion has finished.
  Logical volume centipede2/lvol0 successfully converted.

# second raid5_n (2 stripes) conversion
[root@hayes-01 ~]# lvconvert --yes --force --type linear centipede2/lvol0
  Converting raid5_n LV centipede2/lvol0 to 2 stripes first.
  WARNING: Removing stripes from active logical volume centipede2/lvol0 will shrink it from 108.00 MiB to 36.00 MiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  If that leaves the logical volume larger than 81 extents due to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 centipede2/lvol0"
  Logical volume centipede2/lvol0 successfully converted.

# third raid5_n (2 stripes) conversion? Is this one really necessary?
[root@hayes-01 ~]# lvconvert --yes --force --type linear centipede2/lvol0
  Converting raid5_n LV centipede2/lvol0 to 2 stripes first.
  Logical volume centipede2/lvol0 successfully converted.

# conversion to raid1
[root@hayes-01 ~]# lvconvert --yes --force --type linear centipede2/lvol0
  Replaced LV type linear with possible type raid1.
  Repeat this command to convert to linear after an interim conversion has finished.
  Logical volume centipede2/lvol0 successfully converted.

# finally linear
[root@hayes-01 ~]# lvconvert --yes --force --type linear centipede2/lvol0
  Logical volume centipede2/lvol0 successfully converted.

[root@hayes-01 ~]# lvs -a -o +devices,segtype
  LV    VG         Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices      Type  
  lvol0 centipede2 -wi-a----- 36.00m                                                     /dev/sdb1(3) linear





## raid10 -> linear
[root@hayes-01 ~]# lvcreate -i 3 -L 100M --type raid10  centipede2
  Using default stripesize 64.00 KiB.
  Rounding size 100.00 MiB (25 extents) up to stripe boundary size 108.00 MiB(27 extents).
  Logical volume "lvol0" created.

[root@hayes-01 ~]# lvconvert --type linear centipede2/lvol0
  Replaced LV type linear with possible type raid0_meta.
  Repeat this command to convert to linear after an interim conversion has finished.
Are you sure you want to convert raid10 LV centipede2/lvol0 to raid0_meta type? [y/n]: y
  Logical volume centipede2/lvol0 successfully converted.

[root@hayes-01 ~]# lvconvert --type linear centipede2/lvol0
  Replaced LV type linear with possible type raid5_n.
  Repeat this command to convert to linear after an interim conversion has finished.
Are you sure you want to convert raid0_meta LV centipede2/lvol0 to raid5_n type? [y/n]: y
  Logical volume centipede2/lvol0 successfully converted.

[root@hayes-01 ~]# lvconvert --type linear centipede2/lvol0
  Converting raid5_n LV centipede2/lvol0 to 2 stripes first.
  WARNING: Removing stripes from active logical volume centipede2/lvol0 will shrink it from 108.00 MiB to 36.00 MiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  Interrupt the conversion and run "lvresize -y -l81 centipede2/lvol0" to keep the current size if not done already!
  If that leaves the logical volume larger than 81 extents due to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 centipede2/lvol0"
  Can't remove stripes without --force option.
  Reshape request failed on LV centipede2/lvol0.

[root@hayes-01 ~]# lvconvert --force --stripes 1 centipede2/lvol0
  WARNING: Removing stripes from active logical volume centipede2/lvol0 will shrink it from 108.00 MiB to 36.00 MiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  Interrupt the conversion and run "lvresize -y -l81 centipede2/lvol0" to keep the current size if not done already!
  If that leaves the logical volume larger than 81 extents due to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 centipede2/lvol0"
Are you sure you want to remove 2 images from raid5_n LV centipede2/lvol0? [y/n]: y
  Logical volume centipede2/lvol0 successfully converted.

[root@hayes-01 ~]# lvconvert --type linear centipede2/lvol0
  Converting raid5_n LV centipede2/lvol0 to 2 stripes first.
  Logical volume centipede2/lvol0 successfully converted.
[root@hayes-01 ~]# lvconvert --type linear centipede2/lvol0
  Replaced LV type linear with possible type raid1.
  Repeat this command to convert to linear after an interim conversion has finished.
Are you sure you want to convert raid5_n LV centipede2/lvol0 to raid1 type? [y/n]: y
  Logical volume centipede2/lvol0 successfully converted.

[root@hayes-01 ~]# lvconvert --type linear centipede2/lvol0
Are you sure you want to convert raid1 LV centipede2/lvol0 to type linear losing all resilience? [y/n]: y
  Logical volume centipede2/lvol0 successfully converted.

[root@hayes-01 ~]# lvs -a -o +devices,segtype
  LV    VG         Attr       LSize   Devices      Type  
  lvol0 centipede2 -wi-a----- 36.00m  /dev/sdb1(2) linear

Comment 17 errata-xmlrpc 2018-10-30 11:02:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3193