Bug 1784695 - Do not allow reshape of a raid5 thinpool
Summary: Do not allow reshape of a raid5 thinpool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.7
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1796958
 
Reported: 2019-12-18 06:05 UTC by nikhil kshirsagar
Modified: 2021-09-03 12:50 UTC
CC List: 10 users

Fixed In Version: lvm2-2.02.186-7.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1796958
Environment:
Last Closed: 2020-03-31 20:04:51 UTC
Target Upstream Version:
Embargoed:




Links
System                  ID              Private  Priority  Status  Summary  Last Updated
Red Hat Product Errata  RHBA-2020:1129  0        None      None    None     2020-03-31 20:05:22 UTC

Internal Links: 1782045

Description nikhil kshirsagar 2019-12-18 06:05:29 UTC
Description of problem:

Since we run into the problems described in https://bugzilla.redhat.com/show_bug.cgi?id=1782045 when attempting to reshape a raid5 thin pool, this bz is about disallowing that kind of operation until it is safe to perform.

As of today, attempting a reshape of a raid5 thin pool hangs the lvconvert. A manual dmsetup resume of the underlying sub-LVs does resolve the hang, but leaves the pool unusable and inactive.

Version-Release number of selected component (if applicable):
lvm2-libs-2.02.186-2.el7.x86_64

How reproducible:
Steps to Reproduce:

[root@vm255-21 ~]# lvs
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel_vm255-21 -wi-ao----  <3.50g                                                    
  swap rhel_vm255-21 -wi-ao---- 512.00m   
                                                 
[root@vm255-21 ~]# pvs
  PV         VG            Fmt  Attr PSize    PFree   
  /dev/sda2  rhel_vm255-21 lvm2 a--    <4.00g       0 
  /dev/sdb                 lvm2 ---     1.00g    1.00g
  /dev/sdc                 lvm2 ---     1.00g    1.00g
  /dev/sdd   raid_vg       lvm2 a--  1020.00m 1020.00m
  /dev/sde   raid_vg       lvm2 a--  1020.00m 1020.00m
  /dev/sdf   raid_vg       lvm2 a--  1020.00m 1020.00m
  /dev/sdg   raid_vg       lvm2 a--  1020.00m 1020.00m
  /dev/sdh                 lvm2 ---     7.00g    7.00g
  /dev/sdi                 lvm2 ---    15.00g   15.00g
[root@vm255-21 ~]# vgextend raid_vg /dev/sdh /dev/sdi
  Volume group "raid_vg" successfully extended


[root@vm255-21 ~]# lvcreate -n pool --type raid5 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB(228 extents).
  Logical volume "pool" created.
[root@vm255-21 ~]# lvcreate -n poolmeta -L10M raid_vg
  Rounding up size to full physical extent 12.00 MiB
  Logical volume "poolmeta" created.



[root@vm255-21 ~]# lvs
  LV       VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool     raid_vg       rwi-a-r--- 912.00m                                    100.00          
  poolmeta raid_vg       -wi-a-----  12.00m                                                    
  root     rhel_vm255-21 -wi-ao----  <3.50g                                                    
  swap     rhel_vm255-21 -wi-ao---- 512.00m      


[root@vm255-21 ~]# lvconvert --thinpool raid_vg/pool --poolmetadata poolmeta
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool and raid_vg/poolmeta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool and raid_vg/poolmeta? [y/n]: y
  Converted raid_vg/pool and raid_vg/poolmeta to thin pool.

[root@vm255-21 ~]# lvs -a
  LV                    VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare]       raid_vg       ewi-------  12.00m                                                    
  pool                  raid_vg       twi-a-tz-- 912.00m             0.00   10.29                           
  [pool_tdata]          raid_vg       rwi-aor--- 912.00m                                    100.00          
  [pool_tdata_rimage_0] raid_vg       iwi-aor--- 228.00m                                                    
  [pool_tdata_rimage_1] raid_vg       iwi-aor--- 228.00m                                                    
  [pool_tdata_rimage_2] raid_vg       iwi-aor--- 228.00m                                                    
  [pool_tdata_rimage_3] raid_vg       iwi-aor--- 228.00m                                                    
  [pool_tdata_rimage_4] raid_vg       iwi-aor--- 228.00m                                                    
  [pool_tdata_rmeta_0]  raid_vg       ewi-aor---   4.00m                                                    
  [pool_tdata_rmeta_1]  raid_vg       ewi-aor---   4.00m                                                    
  [pool_tdata_rmeta_2]  raid_vg       ewi-aor---   4.00m                                                    
  [pool_tdata_rmeta_3]  raid_vg       ewi-aor---   4.00m                                                    
  [pool_tdata_rmeta_4]  raid_vg       ewi-aor---   4.00m                                                    
  [pool_tmeta]          raid_vg       ewi-ao----  12.00m                                                    
  root                  rhel_vm255-21 -wi-ao----  <3.50g                                                    
  swap                  rhel_vm255-21 -wi-ao---- 512.00m   
                                                 
[root@vm255-21 ~]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree 
  raid_vg         6   1   0 wz--n- <25.98g 24.82g
  rhel_vm255-21   1   2   0 wz--n-  <4.00g     0 


[root@vm255-21 ~]# dmsetup info -c
Name                        Maj Min Stat Open Targ Event  UUID                                                                      
raid_vg-pool                253  14 L--w    0    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUEqA4sJmc0HYXWbhv7Tr15gaNlXQi6HZs-tpool
rhel_vm255--21-swap         253   1 L--w    2    1      0 LVM-c4nOU8A0djUEjKWmqEB1S12Gtlb2hvFdGZCu7Wrz6l3taLMWrXNIrsvObfo20Uuv      
rhel_vm255--21-root         253   0 L--w    1    1      0 LVM-c4nOU8A0djUEjKWmqEB1S12Gtlb2hvFdct3k0SwGjXFBZu03eheN5JO71TXWhTMV      
raid_vg-pool_tdata_rmeta_4  253  11 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUwwaEvq2IZDRDYgD8TC1qUAPYuO0cO3ji      
raid_vg-pool_tdata          253  13 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUINPWl3li0Lm12DvnJyQvS0Qi1MrvzmMy-tdata
raid_vg-pool_tdata_rmeta_3  253   9 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUZuDtoKgeC3rFS5lDyElmgr3p1OLBUAm7      
raid_vg-pool_tdata_rmeta_2  253   7 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUSPsQNSxkyAkzgi97cBCCSNPnGJFMciDC      
raid_vg-pool_tmeta          253   2 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUbqs72svoXtR6YwM5ZlyIUa0asepDJVT6-tmeta
raid_vg-pool_tdata_rmeta_1  253   5 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUcmGSsep0OfUnC5tqUcatfxofUwHZmwGh      
raid_vg-pool_tdata_rimage_4 253  12 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUplnCmhlJlWGXh2voNwuntdBOZf5wStiD      
raid_vg-pool_tdata_rmeta_0  253   3 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUt96HkMKAYGcpYKQOX6knvUBeHMRAyQEg      
raid_vg-pool_tdata_rimage_3 253  10 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUC3wuOW3bzI7AN5ZGdHeJXeklwEw2itLi      
raid_vg-pool_tdata_rimage_2 253   8 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhU8d3dLRBWQqV7TTKpMoMeAHwsC2BW3E4S      
raid_vg-pool_tdata_rimage_1 253   6 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUNhhTVIN92c5v3rq7g3TgPILoH3CpTos6      
raid_vg-pool_tdata_rimage_0 253   4 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhULwqwVnr7jxj6zn0he8WmHaOalKvVrYGR      

[root@vm255-21 ~]# dmsetup table
raid_vg-pool: 0 1867776 thin-pool 253:2 253:13 128 0 0 
rhel_vm255--21-swap: 0 1048576 linear 8:2 2048
rhel_vm255--21-root: 0 7331840 linear 8:2 1050624
raid_vg-pool_tdata_rmeta_4: 0 8192 linear 8:112 2048
raid_vg-pool_tdata: 0 1867776 raid raid5_ls 3 128 region_size 4096 5 253:3 253:4 253:5 253:6 253:7 253:8 253:9 253:10 253:11 253:12
raid_vg-pool_tdata_rmeta_3: 0 8192 linear 8:96 2048
raid_vg-pool_tdata_rmeta_2: 0 8192 linear 8:80 2048
raid_vg-pool_tmeta: 0 24576 linear 8:48 477184
raid_vg-pool_tdata_rmeta_1: 0 8192 linear 8:64 2048
raid_vg-pool_tdata_rimage_4: 0 466944 linear 8:112 10240
raid_vg-pool_tdata_rmeta_0: 0 8192 linear 8:48 2048
raid_vg-pool_tdata_rimage_3: 0 466944 linear 8:96 10240
raid_vg-pool_tdata_rimage_2: 0 466944 linear 8:80 10240
raid_vg-pool_tdata_rimage_1: 0 466944 linear 8:64 10240
raid_vg-pool_tdata_rimage_0: 0 466944 linear 8:48 10240
[root@vm255-21 ~]# lvconvert --type raid5 raid_vg/pool_tdata --stripes=5
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active and open logical volume raid_vg/pool_tdata will grow it from 228 to 285 extents!
  Run "lvresize -l228 raid_vg/pool_tdata" to shrink it or use the additional capacity.
Are you sure you want to add 1 images to raid5 LV raid_vg/pool_tdata? [y/n]: y
  Internal error: Performing unsafe table load while 15 device(s) are known to be suspended:  (253:13) 


^C

^C^C^C^C^C

<hung lvconvert>



(after resuming all tmeta devices and then the tdata and pool devices in a second terminal...)
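
A sketch of that second-terminal resume sequence (device names are taken from the dmsetup listing above; the exact set of suspended devices can be checked per device with "dmsetup info NAME", which reports "State: SUSPENDED"):

dmsetup resume raid_vg-pool_tmeta
dmsetup resume raid_vg-pool_tdata_rmeta_0    # repeat for the remaining rmeta_*/rimage_* devices
dmsetup resume raid_vg-pool_tdata
dmsetup resume raid_vg-pool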

  device-mapper: resume ioctl on  (253:14) failed: Invalid argument
  Unable to resume raid_vg-pool (253:14).
  Problem reactivating logical volume raid_vg/pool.
  Reshape request failed on LV raid_vg/pool_tdata.
  Releasing activation in critical section.
  libdevmapper exiting with 1 device(s) still suspended.
[root@vm255-21 ~]# lvs
  WARNING: Cannot find matching thin-pool segment for raid_vg/pool.
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool raid_vg       twi-XXtzX-   1.13g                                                    
  root rhel_vm255-21 -wi-ao----  <3.50g                                                    
  swap rhel_vm255-21 -wi-ao---- 512.00m       


Tried deactivating and reactivating, but...

                                             
[root@vm255-21 ~]# lvchange -an raid_vg/pool
[root@vm255-21 ~]# lvchange -ay raid_vg/pool
  device-mapper: reload ioctl on  (253:16) failed: Invalid argument
[root@vm255-21 ~]# lvs
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool raid_vg       twi---tz--   1.13g                                                    
  root rhel_vm255-21 -wi-ao----  <3.50g                                                    
  swap rhel_vm255-21 -wi-ao---- 512.00m                                                    
[root@vm255-21 ~]# 

So while the hang is resolved, the pool doesn't seem usable.

Additional info:
For extending such raid5 thin pools, we have suggested that customers add new, larger PVs to the VG, pvmove the sub-LVs to the new storage, and then lvextend the pool (sketched below).
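
A minimal sketch of that workaround (the new PV /dev/sdX and the +4G size are hypothetical examples):

vgextend raid_vg /dev/sdX        # add the new, larger PV to the VG
pvmove /dev/sdd /dev/sdX         # move the raid sub-LV extents off an old PV; repeat for each old PV
lvextend -L +4G raid_vg/pool     # then grow the thin pool data LV onto the additional space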

Comment 6 Zdenek Kabelac 2020-01-14 09:24:22 UTC
Blocked (until a correct solution is found) by upstream commit:

https://www.redhat.com/archives/lvm-devel/2020-January/msg00008.html

Comment 8 Marian Csontos 2020-01-15 15:20:55 UTC
This turned out to be more restrictive than needed. Being worked on.

Comment 10 Zdenek Kabelac 2020-01-31 12:10:48 UTC
As a follow-up to comment 6 - the restriction has been further limited by this patch (on the stable branch):

https://www.redhat.com/archives/lvm-devel/2020-January/msg00032.html

Comment 11 Corey Marthaler 2020-02-04 16:31:17 UTC
The very scenario given in comment #0 appears unchanged with the latest rpms. Am I missing something? Please post devel unit test results.

3.10.0-1124.el7.x86_64   BUILT: Thu 23 Jan 2020 10:09:44 AM CST
lvm2-2.02.186-6.el7    BUILT: Fri Jan 31 12:26:22 CST 2020
lvm2-libs-2.02.186-6.el7    BUILT: Fri Jan 31 12:26:22 CST 2020
lvm2-cluster-2.02.186-6.el7    BUILT: Fri Jan 31 12:26:22 CST 2020
lvm2-lockd-2.02.186-6.el7    BUILT: Fri Jan 31 12:26:22 CST 2020
lvm2-python-boom-0.9-24.el7    BUILT: Fri Jan 31 12:27:55 CST 2020
cmirror-2.02.186-6.el7    BUILT: Fri Jan 31 12:26:22 CST 2020
device-mapper-1.02.164-6.el7    BUILT: Fri Jan 31 12:26:22 CST 2020
device-mapper-libs-1.02.164-6.el7    BUILT: Fri Jan 31 12:26:22 CST 2020
device-mapper-event-1.02.164-6.el7    BUILT: Fri Jan 31 12:26:22 CST 2020
device-mapper-event-libs-1.02.164-6.el7    BUILT: Fri Jan 31 12:26:22 CST 2020



[root@hayes-03 ~]# vgcreate raid_vg /dev/sd[bcdefgh]1
  Volume group "raid_vg" successfully created

[root@hayes-03 ~]# lvcreate -n pool --type raid5 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB(228 extents).
  Logical volume "pool" created.

[root@hayes-03 ~]# lvcreate -n poolmeta -L10M raid_vg
  Rounding up size to full physical extent 12.00 MiB
  Logical volume "poolmeta" created.

[root@hayes-03 ~]# lvs -a -o +devices
  LV              VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Devices
  pool            raid_vg rwi-a-r--- 912.00m                                    100.00   pool_rimage_0(0),pool_rimage_1(0),pool_rimage_2(0),pool_rimage_3(0),pool_rimage_4(0)
  [pool_rimage_0] raid_vg iwi-aor--- 228.00m                                             /dev/sdb1(1)
  [pool_rimage_1] raid_vg iwi-aor--- 228.00m                                             /dev/sdc1(1)
  [pool_rimage_2] raid_vg iwi-aor--- 228.00m                                             /dev/sdd1(1)
  [pool_rimage_3] raid_vg iwi-aor--- 228.00m                                             /dev/sde1(1)
  [pool_rimage_4] raid_vg iwi-aor--- 228.00m                                             /dev/sdf1(1)
  [pool_rmeta_0]  raid_vg ewi-aor---   4.00m                                             /dev/sdb1(0)
  [pool_rmeta_1]  raid_vg ewi-aor---   4.00m                                             /dev/sdc1(0)
  [pool_rmeta_2]  raid_vg ewi-aor---   4.00m                                             /dev/sdd1(0)
  [pool_rmeta_3]  raid_vg ewi-aor---   4.00m                                             /dev/sde1(0)
  [pool_rmeta_4]  raid_vg ewi-aor---   4.00m                                             /dev/sdf1(0)
  poolmeta        raid_vg -wi-a-----  12.00m                                             /dev/sdb1(58)

[root@hayes-03 ~]# lvconvert --thinpool raid_vg/pool --poolmetadata poolmeta
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool and raid_vg/poolmeta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool and raid_vg/poolmeta? [y/n]: y
  Converted raid_vg/pool and raid_vg/poolmeta to thin pool.

[root@hayes-03 ~]# lvs -a -o +devices
  LV                    VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Devices
  [lvol0_pmspare]       raid_vg ewi-------  12.00m                                             /dev/sdb1(61)
  pool                  raid_vg twi-a-tz-- 912.00m             0.00   10.29                    pool_tdata(0)
  [pool_tdata]          raid_vg rwi-aor--- 912.00m                                    100.00   pool_tdata_rimage_0(0),pool_tdata_rimage_1(0),pool_tdata_rimage_2(0),pool_tdata_rimage_3(0),pool_tdata_rimage_4(0)
  [pool_tdata_rimage_0] raid_vg iwi-aor--- 228.00m                                             /dev/sdb1(1)
  [pool_tdata_rimage_1] raid_vg iwi-aor--- 228.00m                                             /dev/sdc1(1)
  [pool_tdata_rimage_2] raid_vg iwi-aor--- 228.00m                                             /dev/sdd1(1)
  [pool_tdata_rimage_3] raid_vg iwi-aor--- 228.00m                                             /dev/sde1(1)
  [pool_tdata_rimage_4] raid_vg iwi-aor--- 228.00m                                             /dev/sdf1(1)
  [pool_tdata_rmeta_0]  raid_vg ewi-aor---   4.00m                                             /dev/sdb1(0)
  [pool_tdata_rmeta_1]  raid_vg ewi-aor---   4.00m                                             /dev/sdc1(0)
  [pool_tdata_rmeta_2]  raid_vg ewi-aor---   4.00m                                             /dev/sdd1(0)
  [pool_tdata_rmeta_3]  raid_vg ewi-aor---   4.00m                                             /dev/sde1(0)
  [pool_tdata_rmeta_4]  raid_vg ewi-aor---   4.00m                                             /dev/sdf1(0)
  [pool_tmeta]          raid_vg ewi-ao----  12.00m                                             /dev/sdb1(58)

[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_tdata --stripes=5
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active and open logical volume raid_vg/pool_tdata will grow it from 228 to 285 extents!
  Run "lvresize -l228 raid_vg/pool_tdata" to shrink it or use the additional capacity.
Are you sure you want to add 1 images to raid5 LV raid_vg/pool_tdata? [y/n]: y
  Internal error: Performing unsafe table load while 15 device(s) are known to be suspended:  (253:11) 
  [Deadlock]

Feb  4 10:24:50 hayes-03 kernel: INFO: task lvconvert:21636 blocked for more than 120 seconds.
Feb  4 10:24:50 hayes-03 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb  4 10:24:50 hayes-03 kernel: lvconvert       D ffff98febcaac1c0     0 21636   2354 0x00000080
Feb  4 10:24:50 hayes-03 kernel: Call Trace:
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa92bc60c>] ? __queue_work+0x13c/0x3f0
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa9985d89>] schedule+0x29/0x70
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa9983891>] schedule_timeout+0x221/0x2d0
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffc0376e92>] ? dm_make_request+0x172/0x1a0 [dm_mod]
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa9554437>] ? generic_make_request+0x147/0x380
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa998613d>] wait_for_completion+0xfd/0x140
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa92db990>] ? wake_up_state+0x20/0x20
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa948a38d>] submit_bio_wait+0x6d/0x90
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa97a5205>] sync_page_io+0x75/0x100
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffc079e9b8>] read_disk_sb+0x38/0x80 [dm_raid]
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffc07a03f4>] raid_ctr+0x744/0x17f0 [dm_raid]
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffc0379ded>] dm_table_add_target+0x17d/0x440 [dm_mod]
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffc037dd37>] table_load+0x157/0x390 [dm_mod]
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffc037f1cb>] ctl_ioctl+0x24b/0x640 [dm_mod]
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffc037dbe0>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffc037f5ce>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa94628a0>] do_vfs_ioctl+0x3a0/0x5b0
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa9462b51>] SyS_ioctl+0xa1/0xc0
Feb  4 10:24:50 hayes-03 kernel: [<ffffffffa9992ed2>] system_call_fastpath+0x25/0x2a

Comment 12 Zdenek Kabelac 2020-02-04 16:58:04 UTC
Ok - we were trapping changes like raid1 -> raid5 and similar.

But the patch from comment 10 did not catch the raid5 stripeX -> stripeY case.

It will need another small blocking patch.
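
To make the distinction concrete (a sketch; raid_vg/pool_tdata is the stacked thin-pool data LV from the reproducer, while the raid1-backed pool_r1_tdata name is only illustrative):

# Takeover: change of RAID level (segment type), e.g. raid1 -> raid5.
# This class of stacked conversion was already being trapped.
lvconvert --type raid5 raid_vg/pool_r1_tdata --yes

# Reshape: same RAID level, different layout, e.g. raid5 going from 4 to 5 stripes.
# This is the case the comment 10 patch missed and the follow-up patch needs to block.
lvconvert --type raid5 raid_vg/pool_tdata --stripes=5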

Comment 13 Marian Csontos 2020-02-10 14:59:43 UTC
Fixed by commit 253d10f840682f85dad0e4c29f55ff50f94792fa on stable-2.02 branch.
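
To check whether a given system already carries the fix, the build is recorded in the "Fixed In Version" field above; a quick check (sketch):

rpm -q lvm2 device-mapper    # expect lvm2-2.02.186-7.el7 / device-mapper-1.02.164-7.el7 or later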

Comment 14 Corey Marthaler 2020-02-10 16:33:32 UTC
The basic scenario from comment #0 is now properly disallowed (the stacked reshape is rejected) with the latest rpms. Continuing with additional conversion/reshape testing...

lvm2-2.02.186-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
lvm2-libs-2.02.186-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
lvm2-cluster-2.02.186-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
lvm2-lockd-2.02.186-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-1.02.164-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-libs-1.02.164-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-event-1.02.164-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-event-libs-1.02.164-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020


[root@hayes-03 ~]# vgcreate raid_vg /dev/sd[bcdefgh]1
  Volume group "raid_vg" successfully created
[root@hayes-03 ~]# lvcreate -n pool --type raid5 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB(228 extents).
  Logical volume "pool" created.
[root@hayes-03 ~]# lvcreate -n poolmeta -L10M raid_vg
  Rounding up size to full physical extent 12.00 MiB
  Logical volume "poolmeta" created.
[root@hayes-03 ~]# lvconvert --thinpool raid_vg/pool --poolmetadata poolmeta
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool and raid_vg/poolmeta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool and raid_vg/poolmeta? [y/n]: y
  Converted raid_vg/pool and raid_vg/poolmeta to thin pool.
[root@hayes-03 ~]# lvs -a -o +devices
  LV                    VG      Attr       LSize   Pool Origin Data%  Meta%   Cpy%Sync Devices
  [lvol0_pmspare]       raid_vg ewi-------  12.00m                                     /dev/sdb1(61)
  pool                  raid_vg twi-a-tz-- 912.00m             0.00   10.29            pool_tdata(0)
  [pool_tdata]          raid_vg rwi-aor--- 912.00m                            100.00   pool_tdata_rimage_0(0),pool_tdata_rimage_1(0),pool_tdata_rimage_2(0),pool_tdata_rimage_3(0),pool_tdata_rimage_4(0)
  [pool_tdata_rimage_0] raid_vg iwi-aor--- 228.00m                                     /dev/sdb1(1)
  [pool_tdata_rimage_1] raid_vg iwi-aor--- 228.00m                                     /dev/sdc1(1)
  [pool_tdata_rimage_2] raid_vg iwi-aor--- 228.00m                                     /dev/sdd1(1)
  [pool_tdata_rimage_3] raid_vg iwi-aor--- 228.00m                                     /dev/sde1(1)
  [pool_tdata_rimage_4] raid_vg iwi-aor--- 228.00m                                     /dev/sdf1(1)
  [pool_tdata_rmeta_0]  raid_vg ewi-aor---   4.00m                                     /dev/sdb1(0)
  [pool_tdata_rmeta_1]  raid_vg ewi-aor---   4.00m                                     /dev/sdc1(0)
  [pool_tdata_rmeta_2]  raid_vg ewi-aor---   4.00m                                     /dev/sdd1(0)
  [pool_tdata_rmeta_3]  raid_vg ewi-aor---   4.00m                                     /dev/sde1(0)
  [pool_tdata_rmeta_4]  raid_vg ewi-aor---   4.00m                                     /dev/sdf1(0)
  [pool_tmeta]          raid_vg ewi-ao----  12.00m                                     /dev/sdb1(58)


[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_tdata --stripes=5
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_tdata.
  Reshape request failed on LV raid_vg/pool_tdata.

Comment 15 Corey Marthaler 2020-02-12 18:00:53 UTC
Marking this bug verified in the latest build.

3.10.0-1126.1.el7.x86_64
lvm2-2.02.186-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
lvm2-libs-2.02.186-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-1.02.164-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-libs-1.02.164-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-event-1.02.164-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-event-libs-1.02.164-7.el7    BUILT: Mon Feb 10 09:04:11 CST 2020


Normal non-stacked takeover/reshape regression testing passes. Stacked reshape attempts are no longer allowed. That said, a few stacked "takeover" operations are still allowed, namely raid1 and raid10.

# Takeover operations:
[root@hayes-03 ~]# lvs -o +segtype
  LV       VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type     
  pool_r1  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r10 raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r4  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r6  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool


# raid1 -> raid5 still works
[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_r1_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  --stripes not allowed for LV raid_vg/pool_r1_tdata when converting from raid1 to raid5.
  Logical volume raid_vg/pool_r1_tdata successfully converted.
# "invalid"
[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_r4_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Replaced LV type raid5 (same as raid5_ls) with possible type raid5_n.
  Repeat this command to convert to raid5 after an interim conversion has finished.
  Invalid conversion request on raid_vg/pool_r4_tdata.
# "invalid"
[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_r6_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Replaced LV type raid5 (same as raid5_ls) with possible type raid6_ls_6.
  Repeat this command to convert to raid5 after an interim conversion has finished.
  Invalid conversion request on raid_vg/pool_r6_tdata.
# raid10 -> raid5 interim still works.
[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_r10_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Replaced LV type raid5 (same as raid5_ls) with possible type raid0_meta.
  Repeat this command to convert to raid5 after an interim conversion has finished.
  WARNING: ignoring --stripes option on takeover of raid_vg/pool_r10_tdata (reshape afterwards).
  Logical volume raid_vg/pool_r10_tdata successfully converted.


# Reshape operations:
[root@hayes-03 ~]# lvs -o +segtype
  LV       VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type     
  pool_r1  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r10 raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r4  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r6  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool

# All reshape operations now fail
[root@hayes-03 ~]# lvconvert --type raid4 raid_vg/pool_r4_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_r4_tdata.
  Reshape request failed on LV raid_vg/pool_r4_tdata.
[root@hayes-03 ~]# lvconvert --type raid6 raid_vg/pool_r6_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_r6_tdata.
  Reshape request failed on LV raid_vg/pool_r6_tdata.
[root@hayes-03 ~]# lvconvert --type raid10 raid_vg/pool_r10_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_r10_tdata.
  Reshape request failed on LV raid_vg/pool_r10_tdata.

Comment 17 errata-xmlrpc 2020-03-31 20:04:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1129

