Bug 1796958 - Do not allow reshape of a raid5 thinpool
Summary: Do not allow reshape of a raid5 thinpool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: Marian Csontos
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1784695
Blocks:
 
Reported: 2020-01-31 15:25 UTC by Marian Csontos
Modified: 2021-09-07 11:52 UTC
CC List: 13 users

Fixed In Version: lvm2-2.03.08-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1784695
Environment:
Last Closed: 2020-04-28 16:59:23 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: None


Links
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker   RHELPLAN-33959  0        None      None    None     2021-09-07 11:52:31 UTC
Red Hat Product Errata  RHEA-2020:1881  0        None      None    None     2020-04-28 16:59:37 UTC

Comment 3 Corey Marthaler 2020-03-04 17:24:41 UTC
Marking verified in the latest rpms. All striped raid reshape attempts fail (not just raid5) when the raid volume is stacked below a thin pool as its _tdata volume.

kernel-4.18.0-184.el8    BUILT: Tue Feb 25 21:37:02 CST 2020
lvm2-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
lvm2-libs-2.03.08-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-libs-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-event-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020
device-mapper-event-libs-1.02.169-2.el8    BUILT: Mon Feb 24 11:21:38 CST 2020


[root@hayes-03 ~]# lvcreate -n pool_r4 --type raid4 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB(228 extents).
  Logical volume "pool_r4" created.
[root@hayes-03 ~]# lvcreate -n pool_r5 --type raid5 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB(228 extents).
  Logical volume "pool_r5" created.
[root@hayes-03 ~]# lvcreate -n pool_r6 --type raid6 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB(228 extents).
  Logical volume "pool_r6" created.
[root@hayes-03 ~]# lvcreate -n pool_r10 --type raid10 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB(228 extents).
  Logical volume "pool_r10" created.

[root@hayes-03 ~]# lvs -o +segtype
  LV       VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type  
  pool_r0  raid_vg rwi-a-r--- 912.00m                                                     raid0 
  pool_r10 raid_vg rwi-a-r--- 912.00m                                    100.00           raid10
  pool_r4  raid_vg rwi-a-r--- 912.00m                                    100.00           raid4 
  pool_r5  raid_vg rwi-a-r--- 912.00m                                    100.00           raid5 
  pool_r6  raid_vg rwi-a-r--- 912.00m                                    100.00           raid6 

# plus meta volumes
[root@hayes-03 ~]# lvs -o +segtype
  LV         VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type  
  pool_r0    raid_vg rwi-a-r--- 912.00m                                                     raid0 
  pool_r10   raid_vg rwi-a-r--- 912.00m                                    100.00           raid10
  pool_r4    raid_vg rwi-a-r--- 912.00m                                    100.00           raid4 
  pool_r5    raid_vg rwi-a-r--- 912.00m                                    100.00           raid5 
  pool_r6    raid_vg rwi-a-r--- 912.00m                                    100.00           raid6 
  poolmeta0  raid_vg -wi-a-----  12.00m                                                     linear
  poolmeta10 raid_vg -wi-a-----  12.00m                                                     linear
  poolmeta4  raid_vg -wi-a-----  12.00m                                                     linear
  poolmeta5  raid_vg -wi-a-----  12.00m                                                     linear
  poolmeta6  raid_vg -wi-a-----  12.00m                                                     linear

[root@hayes-03 ~]# lvconvert --thinpool raid_vg/pool_r0 --poolmetadata poolmeta0
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool_r0 and raid_vg/poolmeta0 to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool_r0 and raid_vg/poolmeta0? [y/n]: y
  Converted raid_vg/pool_r0 and raid_vg/poolmeta0 to thin pool.
[root@hayes-03 ~]# lvconvert --thinpool raid_vg/pool_r4 --poolmetadata poolmeta4
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool_r4 and raid_vg/poolmeta4 to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool_r4 and raid_vg/poolmeta4? [y/n]: y
  Converted raid_vg/pool_r4 and raid_vg/poolmeta4 to thin pool.
[root@hayes-03 ~]# lvconvert --thinpool raid_vg/pool_r5 --poolmetadata poolmeta5
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool_r5 and raid_vg/poolmeta5 to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool_r5 and raid_vg/poolmeta5? [y/n]: y
  Converted raid_vg/pool_r5 and raid_vg/poolmeta5 to thin pool.
[root@hayes-03 ~]# lvconvert --thinpool raid_vg/pool_r6 --poolmetadata poolmeta6
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool_r6 and raid_vg/poolmeta6 to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool_r6 and raid_vg/poolmeta6? [y/n]: y
  Converted raid_vg/pool_r6 and raid_vg/poolmeta6 to thin pool.
[root@hayes-03 ~]# lvconvert --thinpool raid_vg/pool_r10 --poolmetadata poolmeta10
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool_r10 and raid_vg/poolmeta10 to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool_r10 and raid_vg/poolmeta10? [y/n]: y
  Converted raid_vg/pool_r10 and raid_vg/poolmeta10 to thin pool.

[root@hayes-03 ~]# lvs -o +segtype
  LV       VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type     
  pool_r0  raid_vg twi-a-tz-- 912.00m             0.00   10.29                            thin-pool
  pool_r10 raid_vg twi-a-tz-- 912.00m             0.00   10.29                            thin-pool
  pool_r4  raid_vg twi-a-tz-- 912.00m             0.00   10.29                            thin-pool
  pool_r5  raid_vg twi-a-tz-- 912.00m             0.00   10.29                            thin-pool
  pool_r6  raid_vg twi-a-tz-- 912.00m             0.00   10.29                            thin-pool

# Reshape fails on *all* types of striped raids
[root@hayes-03 ~]# lvconvert raid_vg/pool_r0_tdata --stripes=5 --yes
  Command on LV raid_vg/pool_r0_tdata with invalid LV type striped.
  Command not permitted on LV raid_vg/pool_r0_tdata.
[root@hayes-03 ~]# lvconvert raid_vg/pool_r4_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_r4_tdata.
  Reshape request failed on LV raid_vg/pool_r4_tdata.
[root@hayes-03 ~]# lvconvert raid_vg/pool_r5_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_r5_tdata.
  Reshape request failed on LV raid_vg/pool_r5_tdata.
[root@hayes-03 ~]# lvconvert raid_vg/pool_r6_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_r6_tdata.
  Reshape request failed on LV raid_vg/pool_r6_tdata.
[root@hayes-03 ~]# lvconvert raid_vg/pool_r10_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_r10_tdata.
  Reshape request failed on LV raid_vg/pool_r10_tdata.
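
# For contrast, reshaping a striped raid LV that is *not* stacked under a
# thin pool should still be permitted. A minimal sketch (the LV name
# "plain_r5" is illustrative and was not part of the verification run above):

# Create a standalone raid5 LV in the same VG:
lvcreate -n plain_r5 --type raid5 --stripes=4 -L900M raid_vg

# Wait for the initial sync to finish (Cpy%Sync reaches 100.00), then a
# reshape to 5 stripes is expected to succeed:
lvconvert --stripes=5 --yes raid_vg/plain_r5

# Confirm the segment type and layout afterwards:
lvs -a -o +segtype raid_vg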

Comment 4 Corey Marthaler 2020-03-04 17:31:13 UTC
Verified that a reshape attempt on the pool metadata volume (if it is a striped raid) is also disallowed.


[root@hayes-03 ~]# lvcreate -n meta_r5 --type raid5 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB(228 extents).
  Logical volume "meta_r5" created.
[root@hayes-03 ~]# lvcreate -n pool -L900M raid_vg
  Logical volume "pool" created.

[root@hayes-03 ~]# lvs -a -o +devices
  LV                 VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                                                            
  meta_r5            raid_vg rwi-a-r--- 912.00m                                    100.00           meta_r5_rimage_0(0),meta_r5_rimage_1(0),meta_r5_rimage_2(0),meta_r5_rimage_3(0),meta_r5_rimage_4(0)
  [meta_r5_rimage_0] raid_vg iwi-aor--- 228.00m                                                     /dev/sdb1(1)                                                                                       
  [meta_r5_rimage_1] raid_vg iwi-aor--- 228.00m                                                     /dev/sdc1(1)                                                                                       
  [meta_r5_rimage_2] raid_vg iwi-aor--- 228.00m                                                     /dev/sdd1(1)                                                                                       
  [meta_r5_rimage_3] raid_vg iwi-aor--- 228.00m                                                     /dev/sde1(1)                                                                                       
  [meta_r5_rimage_4] raid_vg iwi-aor--- 228.00m                                                     /dev/sdf1(1)                                                                                       
  [meta_r5_rmeta_0]  raid_vg ewi-aor---   4.00m                                                     /dev/sdb1(0)                                                                                       
  [meta_r5_rmeta_1]  raid_vg ewi-aor---   4.00m                                                     /dev/sdc1(0)                                                                                       
  [meta_r5_rmeta_2]  raid_vg ewi-aor---   4.00m                                                     /dev/sdd1(0)                                                                                       
  [meta_r5_rmeta_3]  raid_vg ewi-aor---   4.00m                                                     /dev/sde1(0)                                                                                       
  [meta_r5_rmeta_4]  raid_vg ewi-aor---   4.00m                                                     /dev/sdf1(0)                                                                                       
  pool               raid_vg -wi-a----- 900.00m                                                     /dev/sdb1(58)                                                                                      

[root@hayes-03 ~]# lvs -o +segtype
  LV      VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type  
  meta_r5 raid_vg rwi-a-r--- 912.00m                                    100.00           raid5 
  pool    raid_vg -wi-a----- 900.00m                                                     linear

[root@hayes-03 ~]# lvconvert --thinpool raid_vg/pool --poolmetadata raid_vg/meta_r5
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool and raid_vg/meta_r5 to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool and raid_vg/meta_r5? [y/n]: y
  Converted raid_vg/pool and raid_vg/meta_r5 to thin pool.

[root@hayes-03 ~]# lvs -a -o +devices
  LV                    VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices                                                                                                           
  [lvol0_pmspare]       raid_vg ewi------- 912.00m                                                     /dev/sdb1(283)                                                                                                    
  pool                  raid_vg twi-a-tz-- 900.00m             0.00   1.76                             pool_tdata(0)                                                                                                     
  [pool_tdata]          raid_vg Twi-ao---- 900.00m                                                     /dev/sdb1(58)                                                                                                     
  [pool_tmeta]          raid_vg ewi-aor--- 912.00m                                    100.00           pool_tmeta_rimage_0(0),pool_tmeta_rimage_1(0),pool_tmeta_rimage_2(0),pool_tmeta_rimage_3(0),pool_tmeta_rimage_4(0)
  [pool_tmeta_rimage_0] raid_vg iwi-aor--- 228.00m                                                     /dev/sdb1(1)                                                                                                      
  [pool_tmeta_rimage_1] raid_vg iwi-aor--- 228.00m                                                     /dev/sdc1(1)                                                                                                      
  [pool_tmeta_rimage_2] raid_vg iwi-aor--- 228.00m                                                     /dev/sdd1(1)                                                                                                      
  [pool_tmeta_rimage_3] raid_vg iwi-aor--- 228.00m                                                     /dev/sde1(1)                                                                                                      
  [pool_tmeta_rimage_4] raid_vg iwi-aor--- 228.00m                                                     /dev/sdf1(1)                                                                                                      
  [pool_tmeta_rmeta_0]  raid_vg ewi-aor---   4.00m                                                     /dev/sdb1(0)                                                                                                      
  [pool_tmeta_rmeta_1]  raid_vg ewi-aor---   4.00m                                                     /dev/sdc1(0)                                                                                                      
  [pool_tmeta_rmeta_2]  raid_vg ewi-aor---   4.00m                                                     /dev/sdd1(0)                                                                                                      
  [pool_tmeta_rmeta_3]  raid_vg ewi-aor---   4.00m                                                     /dev/sde1(0)                                                                                                      
  [pool_tmeta_rmeta_4]  raid_vg ewi-aor---   4.00m                                                     /dev/sdf1(0)                                                                                                      

[root@hayes-03 ~]# lvconvert raid_vg/pool_tmeta --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_tmeta.
  Reshape request failed on LV raid_vg/pool_tmeta.
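
# Since reshape is blocked on volumes stacked under a thin pool, the
# supported way to grow a raid-backed pool is to extend it rather than
# change its stripe count. A minimal sketch (sizes are illustrative):

# Grow the pool's data volume (extends the raid images; no reshape involved):
lvextend -L +512M raid_vg/pool_r5

# Grow the pool's metadata volume if needed:
lvextend --poolmetadatasize +128M raid_vg/pool_r5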

Comment 6 errata-xmlrpc 2020-04-28 16:59:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:1881

