Bug 1781406 - arg checking should allow stripe -> raid|mirror conversion or else give a better error
Summary: arg checking should allow stripe -> raid|mirror conversion or else give a better error
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.2
Hardware: x86_64
OS: Linux
Priority: high
Severity: low
Target Milestone: rc
Target Release: 8.0
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-12-09 23:08 UTC by Corey Marthaler
Modified: 2021-09-07 11:49 UTC
CC: 9 users

Fixed In Version: lvm2-2.03.11-0.2.20201103git8801a86.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-18 15:01:41 UTC
Type: Bug
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2019-12-09 23:08:12 UTC
Description of problem:

# stripe creation
[root@hayes-02 ~]# lvcreate  -i 3 -L 4G -n convert cache_sanity @slow
  Using default stripesize 64.00 KiB.
  Rounding size 4.00 GiB (1024 extents) up to stripe boundary size <4.01 GiB(1026 extents).
  Logical volume "convert" created.

[root@hayes-02 ~]# lvs -a -o +devices
  LV      VG           Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                               
  convert cache_sanity -wi-a----- <4.01g                                                     /dev/sdn1(0),/dev/sdm1(0),/dev/sdl1(0)


# From the man page, -m is only for mirror images (the default mirror type being raid1):
lvconvert(8)
        -m|--mirrors [+|-]Number
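
# (Per lvconvert(8), Number counts images in addition to the original: '-m 1' means
#  two images total, while a '+' or '-' prefix adjusts relative to the current count.)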

# My understanding is that '-m +1' or '-m 1' would add a default raid1 leg (or start an interim conversion). I never gave a "--stripes" or "-R/--regionsize" argument, so those shouldn't be the errors I get back.

[root@hayes-02 ~]# lvconvert --yes -m +1 cache_sanity/convert
  --stripes not allowed when converting striped LV cache_sanity/convert.
  -R/--regionsize not allowed when converting striped LV cache_sanity/convert.
  Logical volume cache_sanity/convert is already of requested type striped.
[root@hayes-02 ~]# lvconvert --yes -m 1 cache_sanity/convert
  --stripes not allowed when converting striped LV cache_sanity/convert.
  -R/--regionsize not allowed when converting striped LV cache_sanity/convert.
  Logical volume cache_sanity/convert is already of requested type striped.

# Adding mirror or raid images is totally allowed when the type is given explicitly:
[root@hayes-02 ~]# lvconvert --yes --type mirror -m +1 cache_sanity/convert
  Logical volume cache_sanity/convert being converted.
  cache_sanity/convert: Converted: 0.88%
  cache_sanity/convert: Converted: 87.82%
  cache_sanity/convert: Converted: 100.00%
  Logical volume cache_sanity/convert is already of requested type striped.

[root@hayes-02 ~]# lvconvert --yes --type raid1 -m +1 cache_sanity/convert
  Replaced LV type raid1 with possible type raid5_n.
  Repeat this command to convert to raid1 after an interim conversion has finished.
  Logical volume cache_sanity/convert successfully converted.


Version-Release number of selected component (if applicable):
kernel-4.18.0-151.el8    BUILT: Fri Nov 15 13:14:53 CST 2019
lvm2-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
lvm2-libs-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
lvm2-dbusd-2.03.07-1.el8    BUILT: Mon Dec  2 00:12:23 CST 2019
lvm2-lockd-2.03.07-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
boom-boot-1.0-0.2.20190610git246b116.el8    BUILT: Mon Jun 10 08:22:40 CDT 2019
device-mapper-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-libs-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-event-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-event-libs-1.02.167-1.el8    BUILT: Mon Dec  2 00:09:32 CST 2019
device-mapper-persistent-data-0.8.5-2.el8    BUILT: Wed Jun  5 10:28:04 CDT 2019

Comment 1 Jonathan Earl Brassow 2019-12-11 21:15:30 UTC
> # My understanding is that '-m +1' or '-m 1' would add a default raid1 leg (or start an interim conversion). I never gave a "--stripes" or "-R/--regionsize" argument, so those shouldn't be the errors I get back.

The request here is simply to clean up the inference of '--type raid1' and, if necessary, the errors produced as well.  Right?

Comment 2 Corey Marthaler 2019-12-13 16:10:35 UTC
(In reply to Jonathan Earl Brassow from comment #1)
> > # My understanding is that '-m +1' or '-m 1' would add a default raid1 leg (or start an interim conversion). I never gave a "--stripes" or "-R/--regionsize" argument, so those shouldn't be the errors I get back.
> 
> The request here is simply to clean up the inference of '--type raid1' and,
> if necessary, the errors produced as well.  Right?


Correct. Either internally add '--type raid1' and perform the redundancy addition as if it had been provided on the command line, OR give an error such as "please provide a proper '--type raid' option when requesting additional images" instead of the current "'--stripes' and '-R/--regionsize' not allowed" messages, since I never gave either of those options.
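
To spell out the two behaviors proposed here (a hypothetical sketch; the quoted error wording is the suggestion from this comment, not actual lvconvert output):

# Option A: infer the type internally, i.e. treat
lvconvert --yes -m +1 cache_sanity/convert
# exactly as if the user had typed
lvconvert --yes --type raid1 -m +1 cache_sanity/convert

# Option B: keep rejecting the command, but with a message naming the real problem:
#   please provide a proper '--type raid' option when requesting additional images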

Comment 5 Heinz Mauelshagen 2020-07-13 17:18:13 UTC
lvm2 commits:
master      -> 8f421bdd7ae926ab95921fb36aedc5d35fc894cc
stable-2.02 -> 61e831aa5e09dfec25d6975f1c9950181c6a71f7

Comment 8 Heinz Mauelshagen 2020-11-19 16:41:20 UTC
# lvcreate -i3 -L500m -n t t
  Using default stripesize 64.00 KiB.
  Rounding size 500.00 MiB (125 extents) up to stripe boundary size 504.00 MiB(126 extents).
  Logical volume "t" created.

# lvs -ao+devices,segtype t
  LV   VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                              Type   
  t    t  -wi-a----- 504.00m                                                     /dev/sdag(0),/dev/sdg(0),/dev/sdf(0) striped

# lvconvert -y -m+1 t/t
  Replaced LV type raid1 with possible type raid5_n.
  Repeat this command to convert to raid1 after an interim conversion has finished.
  Logical volume t/t successfully converted.

# lvs -ao+devices,segtype t
  LV           VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                 Type   
  t            t  rwi-a-r--- 504.00m                                    100.00           t_rimage_0(0),t_rimage_1(0),t_rimage_2(0),t_rimage_3(0) raid5_n
  [t_rimage_0] t  iwi-aor--- 168.00m                                                     /dev/sdag(0)                                            linear 
  [t_rimage_1] t  iwi-aor--- 168.00m                                                     /dev/sdg(0)                                             linear 
  [t_rimage_2] t  iwi-aor--- 168.00m                                                     /dev/sdf(0)                                             linear 
  [t_rimage_3] t  iwi-aor--- 168.00m                                                     /dev/sde(1)                                             linear 
  [t_rmeta_0]  t  ewi-aor---   4.00m                                                     /dev/sdag(42)                                           linear 
  [t_rmeta_1]  t  ewi-aor---   4.00m                                                     /dev/sdg(42)                                            linear 
  [t_rmeta_2]  t  ewi-aor---   4.00m                                                     /dev/sdf(42)                                            linear 
  [t_rmeta_3]  t  ewi-aor---   4.00m                                                     /dev/sde(0)                                             linear 

# Mind that --force is mandatory, because reducing the stripes from 4 to 2 shrinks the RaidLV to 1/3 of its capacity; run lvextend beforehand if that's to be avoided!
# lvconvert -y -m+1 -f t/t
  Using default stripesize 64.00 KiB.
  Converting raid5_n LV t/t to 2 stripes first.
  WARNING: Removing stripes from active logical volume t/t will shrink it from 504.00 MiB to 168.00 MiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  If that leaves the logical volume larger than 378 extents due to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 t/t"
  Logical volume t/t successfully converted.

# lvs -ao+devices,segtype t
  LV           VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                 Type   
  t            t  rwi-a-r-R- 504.00m                                    100.00           t_rimage_0(0),t_rimage_1(0),t_rimage_2(0),t_rimage_3(0) raid5_n
  [t_rimage_0] t  iwi-aor--- 172.00m                                                     /dev/sdag(0)                                            linear 
  [t_rimage_0] t  iwi-aor--- 172.00m                                                     /dev/sdag(43)                                           linear 
  [t_rimage_1] t  iwi-aor--- 172.00m                                                     /dev/sdg(0)                                             linear 
  [t_rimage_1] t  iwi-aor--- 172.00m                                                     /dev/sdg(43)                                            linear 
  [t_rimage_2] t  Iwi-aor-R- 172.00m                                                     /dev/sdf(0)                                             linear 
  [t_rimage_2] t  Iwi-aor-R- 172.00m                                                     /dev/sdf(43)                                            linear 
  [t_rimage_3] t  Iwi-aor-R- 172.00m                                                     /dev/sde(1)                                             linear 
  [t_rmeta_0]  t  ewi-aor---   4.00m                                                     /dev/sdag(42)                                           linear 
  [t_rmeta_1]  t  ewi-aor---   4.00m                                                     /dev/sdg(42)                                            linear 
  [t_rmeta_2]  t  ewi-aor-R-   4.00m                                                     /dev/sdf(42)                                            linear 
  [t_rmeta_3]  t  ewi-aor-R-   4.00m                                                     /dev/sde(0)                                             linear 
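# (Per lvs(8): a capital 'I' attr marks a raid image that is out of sync, and 'R' in the
#  ninth attr column means "remove after reshape", i.e. these sub-LVs hold the freed stripes.)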

# lvconvert -y -m+1 t/t
  Using default stripesize 64.00 KiB.
  Converting raid5_n LV t/t to 2 stripes first.
  Logical volume t/t successfully converted.

# lvs -ao+devices,segtype t
  LV           VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                     Type   
  t            t  rwi-a-r--- 168.00m                                    100.00           t_rimage_0(0),t_rimage_1(0) raid5_n
  [t_rimage_0] t  iwi-aor--- 172.00m                                                     /dev/sdag(0)                linear 
  [t_rimage_0] t  iwi-aor--- 172.00m                                                     /dev/sdag(43)               linear 
  [t_rimage_1] t  iwi-aor--- 172.00m                                                     /dev/sdg(0)                 linear 
  [t_rimage_1] t  iwi-aor--- 172.00m                                                     /dev/sdg(43)                linear 
  [t_rmeta_0]  t  ewi-aor---   4.00m                                                     /dev/sdag(42)               linear 
  [t_rmeta_1]  t  ewi-aor---   4.00m                                                     /dev/sdg(42)                linear 

# lvconvert -y -m+1 t/t
  Using default stripesize 64.00 KiB.
  Logical volume t/t successfully converted.

# lvs -ao+devices,segtype t
  LV           VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                     Type  
  t            t  rwi-a-r--- 168.00m                                    100.00           t_rimage_0(0),t_rimage_1(0) raid1 
  [t_rimage_0] t  iwi-aor--- 168.00m                                                     /dev/sdag(1)                linear
  [t_rimage_0] t  iwi-aor--- 168.00m                                                     /dev/sdag(43)               linear
  [t_rimage_1] t  iwi-aor--- 168.00m                                                     /dev/sdg(1)                 linear
  [t_rimage_1] t  iwi-aor--- 168.00m                                                     /dev/sdg(43)                linear
  [t_rmeta_0]  t  ewi-aor---   4.00m                                                     /dev/sdag(42)               linear
  [t_rmeta_1]  t  ewi-aor---   4.00m                                                     /dev/sdg(42)                linear
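
Condensed, the stripe -> raid1 sequence demonstrated above is (a sketch assuming the same 3-stripe LV t/t; the lvextend step is the optional one from the note above and its size is purely illustrative; each lvconvert repeat should wait until the previous interim conversion has finished syncing):

# lvcreate -i3 -L500m -n t t     # create the striped LV
# lvextend -L+1g t/t             # optional: grow first so the stripe-removal shrink leaves enough room
# lvconvert -y -m+1 t/t          # striped -> raid5_n (interim type)
# lvconvert -y -m+1 -f t/t       # reshape down to 2 stripes; --force acknowledges the shrink
# lvconvert -y -m+1 t/t          # remove the freed stripes
# lvconvert -y -m+1 t/t          # final repeat: raid5_n -> raid1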

Comment 10 Corey Marthaler 2020-12-07 18:55:08 UTC
Fix verified in the latest rpms.

kernel-4.18.0-257.el8    BUILT: Wed Dec  2 01:21:14 CST 2020
lvm2-2.03.11-0.2.20201103git8801a86.el8    BUILT: Wed Nov  4 07:04:46 CST 2020
lvm2-libs-2.03.11-0.2.20201103git8801a86.el8    BUILT: Wed Nov  4 07:04:46 CST 2020
device-mapper-1.02.175-0.2.20201103git8801a86.el8    BUILT: Wed Nov  4 07:04:46 CST 2020
device-mapper-libs-1.02.175-0.2.20201103git8801a86.el8    BUILT: Wed Nov  4 07:04:46 CST 2020


As Heinz mentions above, however, a user wishing to go from stripe -> raid1 should really know what they're doing and understand the possible data loss involved in reshaping and removing stripe images to end up at the raid1 segtype. In the example below, it takes 4 convert commands (three of them with --force to remove images):


[root@hayes-02 ~]# lvcreate  -i 3 -L 4G -n convert cache_sanity /dev/sdb1 /dev/sdc1 /dev/sdd1
  Using default stripesize 64.00 KiB.
  Rounding size 4.00 GiB (1024 extents) up to stripe boundary size <4.01 GiB (1026 extents).
  Logical volume "convert" created.

[root@hayes-02 ~]# lvs -a -o +devices
  LV      VG           Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                               
  convert cache_sanity -wi-a----- <4.01g                                                     /dev/sdb1(0),/dev/sdc1(0),/dev/sdd1(0)

[root@hayes-02 ~]# lvconvert --yes -m +1 cache_sanity/convert
  Replaced LV type raid1 with possible type raid5_n.
  Repeat this command to convert to raid1 after an interim conversion has finished.
  Logical volume cache_sanity/convert successfully converted.

[root@hayes-02 ~]# lvs -a -o +devices,segtype
  LV                 VG           Attr       LSize  Cpy%Sync Convert Devices                                                                         Type   
  convert            cache_sanity rwi-a-r--- <4.01g 100.00           convert_rimage_0(0),convert_rimage_1(0),convert_rimage_2(0),convert_rimage_3(0) raid5_n
  [convert_rimage_0] cache_sanity iwi-aor--- <1.34g                  /dev/sdb1(0)                                                                    linear 
  [convert_rimage_1] cache_sanity iwi-aor--- <1.34g                  /dev/sdc1(0)                                                                    linear 
  [convert_rimage_2] cache_sanity iwi-aor--- <1.34g                  /dev/sdd1(0)                                                                    linear 
  [convert_rimage_3] cache_sanity iwi-aor--- <1.34g                  /dev/sde1(1)                                                                    linear 
  [convert_rmeta_0]  cache_sanity ewi-aor---  4.00m                  /dev/sdb1(342)                                                                  linear 
  [convert_rmeta_1]  cache_sanity ewi-aor---  4.00m                  /dev/sdc1(342)                                                                  linear 
  [convert_rmeta_2]  cache_sanity ewi-aor---  4.00m                  /dev/sdd1(342)                                                                  linear 
  [convert_rmeta_3]  cache_sanity ewi-aor---  4.00m                  /dev/sde1(0)                                                                    linear 

[root@hayes-02 ~]# lvconvert --yes -m +1 --force  cache_sanity/convert
  Using default stripesize 64.00 KiB.
  Converting raid5_n LV cache_sanity/convert to 2 stripes first.
  WARNING: Removing stripes from active logical volume cache_sanity/convert will shrink it from <4.01 GiB to <1.34 GiB!
  THIS MAY DESTROY (PARTS OF) YOUR DATA!
  If that leaves the logical volume larger than 3078 extents due to stripe rounding,
  you may want to grow the content afterwards (filesystem etc.)
  WARNING: to remove freed stripes after the conversion has finished, you have to run "lvconvert --stripes 1 cache_sanity/convert"
  Logical volume cache_sanity/convert successfully converted.

[root@hayes-02 ~]# lvs -a -o +devices,segtype
  LV                 VG           Attr       LSize  Cpy%Sync Convert Devices                                                                         Type   
  convert            cache_sanity rwi-a-r-s- <4.01g 75.70            convert_rimage_0(0),convert_rimage_1(0),convert_rimage_2(0),convert_rimage_3(0) raid5_n
  [convert_rimage_0] cache_sanity Iwi-aor--- <1.34g                  /dev/sdb1(0)                                                                    linear 
  [convert_rimage_0] cache_sanity Iwi-aor--- <1.34g                  /dev/sdb1(343)                                                                  linear 
  [convert_rimage_1] cache_sanity Iwi-aor--- <1.34g                  /dev/sdc1(0)                                                                    linear 
  [convert_rimage_1] cache_sanity Iwi-aor--- <1.34g                  /dev/sdc1(343)                                                                  linear 
  [convert_rimage_2] cache_sanity Iwi-aor-R- <1.34g                  /dev/sdd1(0)                                                                    linear 
  [convert_rimage_2] cache_sanity Iwi-aor-R- <1.34g                  /dev/sdd1(343)                                                                  linear 
  [convert_rimage_3] cache_sanity Iwi-aor-R- <1.34g                  /dev/sde1(1)                                                                    linear 
  [convert_rmeta_0]  cache_sanity ewi-aor---  4.00m                  /dev/sdb1(342)                                                                  linear 
  [convert_rmeta_1]  cache_sanity ewi-aor---  4.00m                  /dev/sdc1(342)                                                                  linear 
  [convert_rmeta_2]  cache_sanity ewi-aor-R-  4.00m                  /dev/sdd1(342)                                                                  linear 
  [convert_rmeta_3]  cache_sanity ewi-aor-R-  4.00m                  /dev/sde1(0)                                                                    linear 

[root@hayes-02 ~]# lvconvert --yes -m +1 --force  cache_sanity/convert
  Using default stripesize 64.00 KiB.
  Converting raid5_n LV cache_sanity/convert to 2 stripes first.
  Logical volume cache_sanity/convert successfully converted.

[root@hayes-02 ~]# lvs -a -o +devices,segtype
  LV                 VG           Attr       LSize  Cpy%Sync Convert Devices                                 Type   
  convert            cache_sanity rwi-a-r--- <1.34g 100.00           convert_rimage_0(0),convert_rimage_1(0) raid5_n
  [convert_rimage_0] cache_sanity iwi-aor--- <1.34g                  /dev/sdb1(0)                            linear 
  [convert_rimage_0] cache_sanity iwi-aor--- <1.34g                  /dev/sdb1(343)                          linear 
  [convert_rimage_1] cache_sanity iwi-aor--- <1.34g                  /dev/sdc1(0)                            linear 
  [convert_rimage_1] cache_sanity iwi-aor--- <1.34g                  /dev/sdc1(343)                          linear 
  [convert_rmeta_0]  cache_sanity ewi-aor---  4.00m                  /dev/sdb1(342)                          linear 
  [convert_rmeta_1]  cache_sanity ewi-aor---  4.00m                  /dev/sdc1(342)                          linear 

[root@hayes-02 ~]# lvconvert --yes -m +1 --force  cache_sanity/convert
  Using default stripesize 64.00 KiB.
  Logical volume cache_sanity/convert successfully converted.

[root@hayes-02 ~]# lvs -a -o +devices,segtype
  LV                 VG           Attr       LSize  Cpy%Sync Convert Devices                                 Type  
  convert            cache_sanity rwi-a-r--- <1.34g 100.00           convert_rimage_0(0),convert_rimage_1(0) raid1 
  [convert_rimage_0] cache_sanity iwi-aor--- <1.34g                  /dev/sdb1(1)                            linear
  [convert_rimage_0] cache_sanity iwi-aor--- <1.34g                  /dev/sdb1(343)                          linear
  [convert_rimage_1] cache_sanity iwi-aor--- <1.34g                  /dev/sdc1(1)                            linear
  [convert_rimage_1] cache_sanity iwi-aor--- <1.34g                  /dev/sdc1(343)                          linear
  [convert_rmeta_0]  cache_sanity ewi-aor---  4.00m                  /dev/sdb1(342)                          linear
  [convert_rmeta_1]  cache_sanity ewi-aor---  4.00m                  /dev/sdc1(342)                          linear

Comment 14 Corey Marthaler 2021-01-08 21:50:14 UTC
Verified in the latest nightly kernel/lvm2:

kernel-4.18.0-269.el8    BUILT: Thu Dec 31 07:52:55 CST 2020
lvm2-2.03.11-0.4.20201222gitb84a992.el8    BUILT: Tue Dec 22 06:33:49 CST 2020
lvm2-libs-2.03.11-0.4.20201222gitb84a992.el8    BUILT: Tue Dec 22 06:33:49 CST 2020


[root@hayes-02 ~]# lvcreate  -i 3 -L 4G -n convert cache_sanity @slow
  Using default stripesize 64.00 KiB.
  Rounding size 4.00 GiB (1024 extents) up to stripe boundary size <4.01 GiB (1026 extents).
  Logical volume "convert" created.
[root@hayes-02 ~]# lvs -a -o +devices
  LV      VG           Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                            
  convert cache_sanity -wi-a----- <4.01g                                                     /dev/sdb(0),/dev/sdh(0),/dev/sdi(0)
[root@hayes-02 ~]# lvconvert --yes -m +1 cache_sanity/convert
  Replaced LV type raid1 with possible type raid5_n.
  Repeat this command to convert to raid1 after an interim conversion has finished.
  Logical volume cache_sanity/convert successfully converted.

Comment 16 errata-xmlrpc 2021-05-18 15:01:41 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1659

