Bug 1782045 - reshape of a raid5 thinpool results in a hung lvconvert with an error " Internal error: Performing unsafe table load while XX device(s) are known to be suspended"
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: ---
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: rc
Target Release: ---
Assignee: Heinz Mauelshagen
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1785670
Blocks: 1439399
 
Reported: 2019-12-11 04:03 UTC by nikhil kshirsagar
Modified: 2023-09-18 00:19 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-04-29 07:28:11 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
lvconvert -vvvv output for the hanging conversion (326.22 KB, text/plain)
2019-12-11 20:20 UTC, Heinz Mauelshagen


Links
Red Hat Bugzilla 1784695 (urgent, CLOSED): Do not allow reshape of a raid5 thinpool (last updated 2021-09-03 12:50:39 UTC)

Internal Links: 1788596

Description nikhil kshirsagar 2019-12-11 04:03:54 UTC
Description of problem:

Similar to https://bugzilla.redhat.com/show_bug.cgi?id=1365286, but here a raid5-related lvconvert hangs when a reshape is attempted after the raid5 LV has been converted to a thin pool. Note that no thin LVs have been created in this setup at all so far.

A specific question I have is: what is the correct approach to reshaping such a raid5 thin pool? I've also tried creating the pool first, then lvconverting it to raid5 (the command did raid1->raid5 and asked me to run the lvconvert again), and then attempting the reshape. Same results.
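
For clarity, the second path I tried was roughly the following (VG/LV names match the setup below; the exact stripe counts are illustrative):

  # lvcreate --thinpool pool -L900M raid_vg                  # create the thin pool first (linear data LV)
  # lvconvert --type raid5 --stripes 4 raid_vg/pool_tdata    # first run only gets to raid1 and asks to run again
  # lvconvert --type raid5 --stripes 4 raid_vg/pool_tdata    # second run reaches raid5
  # lvconvert --type raid5 --stripes 5 raid_vg/pool_tdata    # the reshape then hangs in the same way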

Version-Release number of selected component (if applicable):
lvm2-2.02.186-2.el7.x86_64


Steps to Reproduce:

[root@vm255-21 ~]# lvs
  LV   VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel_vm255-21 -wi-ao----  <3.50g                                                    
  swap rhel_vm255-21 -wi-ao---- 512.00m   
                                                 
[root@vm255-21 ~]# pvs
  PV         VG            Fmt  Attr PSize    PFree   
  /dev/sda2  rhel_vm255-21 lvm2 a--    <4.00g       0 
  /dev/sdb                 lvm2 ---     1.00g    1.00g
  /dev/sdc                 lvm2 ---     1.00g    1.00g
  /dev/sdd   raid_vg       lvm2 a--  1020.00m 1020.00m
  /dev/sde   raid_vg       lvm2 a--  1020.00m 1020.00m
  /dev/sdf   raid_vg       lvm2 a--  1020.00m 1020.00m
  /dev/sdg   raid_vg       lvm2 a--  1020.00m 1020.00m
  /dev/sdh                 lvm2 ---     7.00g    7.00g
  /dev/sdi                 lvm2 ---    15.00g   15.00g
[root@vm255-21 ~]# vgextend raid_vg /dev/sdh /dev/sdi
  Volume group "raid_vg" successfully extended


[root@vm255-21 ~]# lvcreate -n pool --type raid5 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB(228 extents).
  Logical volume "pool" created.
[root@vm255-21 ~]# lvcreate -n poolmeta -L10M raid_vg
  Rounding up size to full physical extent 12.00 MiB
  Logical volume "poolmeta" created.



[root@vm255-21 ~]# lvs
  LV       VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool     raid_vg       rwi-a-r--- 912.00m                                    100.00          
  poolmeta raid_vg       -wi-a-----  12.00m                                                    
  root     rhel_vm255-21 -wi-ao----  <3.50g                                                    
  swap     rhel_vm255-21 -wi-ao---- 512.00m      


[root@vm255-21 ~]# lvconvert --thinpool raid_vg/pool --poolmetadata poolmeta
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool and raid_vg/poolmeta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool and raid_vg/poolmeta? [y/n]: y
  Converted raid_vg/pool and raid_vg/poolmeta to thin pool.

[root@vm255-21 ~]# lvs -a
  LV                    VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  [lvol0_pmspare]       raid_vg       ewi-------  12.00m                                                    
  pool                  raid_vg       twi-a-tz-- 912.00m             0.00   10.29                           
  [pool_tdata]          raid_vg       rwi-aor--- 912.00m                                    100.00          
  [pool_tdata_rimage_0] raid_vg       iwi-aor--- 228.00m                                                    
  [pool_tdata_rimage_1] raid_vg       iwi-aor--- 228.00m                                                    
  [pool_tdata_rimage_2] raid_vg       iwi-aor--- 228.00m                                                    
  [pool_tdata_rimage_3] raid_vg       iwi-aor--- 228.00m                                                    
  [pool_tdata_rimage_4] raid_vg       iwi-aor--- 228.00m                                                    
  [pool_tdata_rmeta_0]  raid_vg       ewi-aor---   4.00m                                                    
  [pool_tdata_rmeta_1]  raid_vg       ewi-aor---   4.00m                                                    
  [pool_tdata_rmeta_2]  raid_vg       ewi-aor---   4.00m                                                    
  [pool_tdata_rmeta_3]  raid_vg       ewi-aor---   4.00m                                                    
  [pool_tdata_rmeta_4]  raid_vg       ewi-aor---   4.00m                                                    
  [pool_tmeta]          raid_vg       ewi-ao----  12.00m                                                    
  root                  rhel_vm255-21 -wi-ao----  <3.50g                                                    
  swap                  rhel_vm255-21 -wi-ao---- 512.00m   
                                                 
[root@vm255-21 ~]# vgs
  VG            #PV #LV #SN Attr   VSize   VFree 
  raid_vg         6   1   0 wz--n- <25.98g 24.82g
  rhel_vm255-21   1   2   0 wz--n-  <4.00g     0 


[root@vm255-21 ~]# dmsetup info -c
Name                        Maj Min Stat Open Targ Event  UUID                                                                      
raid_vg-pool                253  14 L--w    0    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUEqA4sJmc0HYXWbhv7Tr15gaNlXQi6HZs-tpool
rhel_vm255--21-swap         253   1 L--w    2    1      0 LVM-c4nOU8A0djUEjKWmqEB1S12Gtlb2hvFdGZCu7Wrz6l3taLMWrXNIrsvObfo20Uuv      
rhel_vm255--21-root         253   0 L--w    1    1      0 LVM-c4nOU8A0djUEjKWmqEB1S12Gtlb2hvFdct3k0SwGjXFBZu03eheN5JO71TXWhTMV      
raid_vg-pool_tdata_rmeta_4  253  11 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUwwaEvq2IZDRDYgD8TC1qUAPYuO0cO3ji      
raid_vg-pool_tdata          253  13 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUINPWl3li0Lm12DvnJyQvS0Qi1MrvzmMy-tdata
raid_vg-pool_tdata_rmeta_3  253   9 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUZuDtoKgeC3rFS5lDyElmgr3p1OLBUAm7      
raid_vg-pool_tdata_rmeta_2  253   7 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUSPsQNSxkyAkzgi97cBCCSNPnGJFMciDC      
raid_vg-pool_tmeta          253   2 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUbqs72svoXtR6YwM5ZlyIUa0asepDJVT6-tmeta
raid_vg-pool_tdata_rmeta_1  253   5 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUcmGSsep0OfUnC5tqUcatfxofUwHZmwGh      
raid_vg-pool_tdata_rimage_4 253  12 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUplnCmhlJlWGXh2voNwuntdBOZf5wStiD      
raid_vg-pool_tdata_rmeta_0  253   3 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUt96HkMKAYGcpYKQOX6knvUBeHMRAyQEg      
raid_vg-pool_tdata_rimage_3 253  10 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUC3wuOW3bzI7AN5ZGdHeJXeklwEw2itLi      
raid_vg-pool_tdata_rimage_2 253   8 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhU8d3dLRBWQqV7TTKpMoMeAHwsC2BW3E4S      
raid_vg-pool_tdata_rimage_1 253   6 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUNhhTVIN92c5v3rq7g3TgPILoH3CpTos6      
raid_vg-pool_tdata_rimage_0 253   4 L--w    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhULwqwVnr7jxj6zn0he8WmHaOalKvVrYGR      

[root@vm255-21 ~]# dmsetup table
raid_vg-pool: 0 1867776 thin-pool 253:2 253:13 128 0 0 
rhel_vm255--21-swap: 0 1048576 linear 8:2 2048
rhel_vm255--21-root: 0 7331840 linear 8:2 1050624
raid_vg-pool_tdata_rmeta_4: 0 8192 linear 8:112 2048
raid_vg-pool_tdata: 0 1867776 raid raid5_ls 3 128 region_size 4096 5 253:3 253:4 253:5 253:6 253:7 253:8 253:9 253:10 253:11 253:12
raid_vg-pool_tdata_rmeta_3: 0 8192 linear 8:96 2048
raid_vg-pool_tdata_rmeta_2: 0 8192 linear 8:80 2048
raid_vg-pool_tmeta: 0 24576 linear 8:48 477184
raid_vg-pool_tdata_rmeta_1: 0 8192 linear 8:64 2048
raid_vg-pool_tdata_rimage_4: 0 466944 linear 8:112 10240
raid_vg-pool_tdata_rmeta_0: 0 8192 linear 8:48 2048
raid_vg-pool_tdata_rimage_3: 0 466944 linear 8:96 10240
raid_vg-pool_tdata_rimage_2: 0 466944 linear 8:80 10240
raid_vg-pool_tdata_rimage_1: 0 466944 linear 8:64 10240
raid_vg-pool_tdata_rimage_0: 0 466944 linear 8:48 10240
[root@vm255-21 ~]# lvconvert --type raid5 raid_vg/pool_tdata --stripes=5
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active and open logical volume raid_vg/pool_tdata will grow it from 228 to 285 extents!
  Run "lvresize -l228 raid_vg/pool_tdata" to shrink it or use the additional capacity.
Are you sure you want to add 1 images to raid5 LV raid_vg/pool_tdata? [y/n]: y
  Internal error: Performing unsafe table load while 15 device(s) are known to be suspended:  (253:13) 


^C

^C^C^C^C^C

-----------

It's lvconvert that's hung, not the server. I logged in from another terminal and gathered the following output:

[root@vm255-21 ~]# dmsetup info -c
Name                        Maj Min Stat Open Targ Event  UUID                                                                      
raid_vg-pool                253  14 LIsw    0    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUEqA4sJmc0HYXWbhv7Tr15gaNlXQi6HZs-tpool
rhel_vm255--21-swap         253   1 L--w    2    1      0 LVM-c4nOU8A0djUEjKWmqEB1S12Gtlb2hvFdGZCu7Wrz6l3taLMWrXNIrsvObfo20Uuv      
rhel_vm255--21-root         253   0 L--w    1    1      0 LVM-c4nOU8A0djUEjKWmqEB1S12Gtlb2hvFdct3k0SwGjXFBZu03eheN5JO71TXWhTMV      
raid_vg-pool_tdata_rmeta_5  253  15 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUsyInLQ1x4Nv8UrD8lDaKTGe2R10NL4a1      
raid_vg-pool_tdata_rmeta_4  253  11 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUwwaEvq2IZDRDYgD8TC1qUAPYuO0cO3ji      
raid_vg-pool_tdata          253  13 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUINPWl3li0Lm12DvnJyQvS0Qi1MrvzmMy-tdata
raid_vg-pool_tdata_rmeta_3  253   9 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUZuDtoKgeC3rFS5lDyElmgr3p1OLBUAm7      
raid_vg-pool_tdata_rmeta_2  253   7 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUSPsQNSxkyAkzgi97cBCCSNPnGJFMciDC      
raid_vg-pool_tmeta          253   2 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUbqs72svoXtR6YwM5ZlyIUa0asepDJVT6-tmeta
raid_vg-pool_tdata_rimage_5 253  16 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUf5m3uzCUiNgG7birWWruSSjTRP3VdhRF      
raid_vg-pool_tdata_rmeta_1  253   5 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUcmGSsep0OfUnC5tqUcatfxofUwHZmwGh      
raid_vg-pool_tdata_rimage_4 253  12 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUplnCmhlJlWGXh2voNwuntdBOZf5wStiD      
raid_vg-pool_tdata_rmeta_0  253   3 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUt96HkMKAYGcpYKQOX6knvUBeHMRAyQEg      
raid_vg-pool_tdata_rimage_3 253  10 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUC3wuOW3bzI7AN5ZGdHeJXeklwEw2itLi      
raid_vg-pool_tdata_rimage_2 253   8 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhU8d3dLRBWQqV7TTKpMoMeAHwsC2BW3E4S      
raid_vg-pool_tdata_rimage_1 253   6 L-sw    1    1      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhUNhhTVIN92c5v3rq7g3TgPILoH3CpTos6      
raid_vg-pool_tdata_rimage_0 253   4 L-sw    1    2      0 LVM-px133RvPuHK2C842QsmknRJ8WE9wHkhULwqwVnr7jxj6zn0he8WmHaOalKvVrYGR      
[root@vm255-21 ~]# dmsetup table
raid_vg-pool: 0 1867776 thin-pool 253:2 253:13 128 0 0 
rhel_vm255--21-swap: 0 1048576 linear 8:2 2048
rhel_vm255--21-root: 0 7331840 linear 8:2 1050624
raid_vg-pool_tdata_rmeta_5: 0 8192 linear 8:128 2048
raid_vg-pool_tdata_rmeta_4: 0 8192 linear 8:112 2048
raid_vg-pool_tdata: 0 2334720 raid raid5_ls 7 128 region_size 4096 6 253:3 253:4 253:5 253:6 253:7 253:8 253:9 253:10 253:11 253:12 253:15 253:16
raid_vg-pool_tdata_rmeta_3: 0 8192 linear 8:96 2048
raid_vg-pool_tdata_rmeta_2: 0 8192 linear 8:80 2048
raid_vg-pool_tmeta: 0 24576 linear 8:48 477184
raid_vg-pool_tdata_rimage_5: 0 475136 linear 8:128 10240
raid_vg-pool_tdata_rmeta_1: 0 8192 linear 8:64 2048
raid_vg-pool_tdata_rimage_4: 0 475136 linear 8:112 10240
raid_vg-pool_tdata_rmeta_0: 0 8192 linear 8:48 2048
raid_vg-pool_tdata_rimage_3: 0 475136 linear 8:96 10240
raid_vg-pool_tdata_rimage_2: 0 475136 linear 8:80 10240
raid_vg-pool_tdata_rimage_1: 0 475136 linear 8:64 10240
raid_vg-pool_tdata_rimage_0: 0 466944 linear 8:48 10240
raid_vg-pool_tdata_rimage_0: 466944 8192 linear 8:48 526336
[root@vm255-21 ~]# ps -eaf | grep lvconvert
root     12632 12282  0 22:41 pts/0    00:00:00 lvconvert --type raid5 raid_vg/pool_tdata --stripes=5
root     12706 12690  0 22:43 pts/1    00:00:00 grep --color=auto lvconvert
[root@vm255-21 ~]# kill -9 12632
[root@vm255-21 ~]# ps -aux | grep lvconvert
root     12632  0.0  0.8 186324 31572 pts/0    D<L+ 22:41   0:00 lvconvert --type raid5 raid_vg/pool_tdata --stripes=5
root     12708  0.0  0.0 112712   972 pts/1    S+   22:43   0:00 grep --color=auto lvconvert
[root@vm255-21 ~]# vi /var/log/messages
[root@vm255-21 ~]# tail 0f /var/log/messages
tail: cannot open ‘0f’ for reading: No such file or directory
==> /var/log/messages <==
Dec 10 22:42:28 vm255-21 kernel: md/raid:mdX: device dm-4 operational as raid disk 0
Dec 10 22:42:28 vm255-21 kernel: md/raid:mdX: device dm-6 operational as raid disk 1
Dec 10 22:42:28 vm255-21 kernel: md/raid:mdX: device dm-8 operational as raid disk 2
Dec 10 22:42:28 vm255-21 kernel: md/raid:mdX: device dm-10 operational as raid disk 3
Dec 10 22:42:28 vm255-21 kernel: md/raid:mdX: device dm-12 operational as raid disk 4
Dec 10 22:42:28 vm255-21 kernel: md/raid:mdX: raid level 5 active with 5 out of 5 devices, algorithm 2
Dec 10 22:42:28 vm255-21 lvm[12428]: No longer monitoring RAID device raid_vg-pool_tdata for events.
Dec 10 22:42:28 vm255-21 dmeventd[12428]: No longer monitoring thin pool raid_vg-pool.
Dec 10 22:42:57 vm255-21 systemd: Started Session 22 of user root.
Dec 10 22:42:57 vm255-21 systemd-logind: New session 22 of user root.



Additional info:
Sent email to Heinz and Mcsontos and also discussed with Zdenek. 

P.S.: A reshape without the thin pool in the picture seems to work fine:

[root@vm255-21 ~]# vgcreate raid_vg /dev/sdd /dev/sde /dev/sdf /dev/sdg
  Volume group "raid_vg" successfully created

[root@vm255-21 ~]# lvcreate --type raid5 -L200M raid_vg
  Using default stripesize 64.00 KiB.
  Logical volume "lvol0" created.

[root@vm255-21 ~]# lvs -a
  LV               VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0            raid_vg       rwi-a-r--- 200.00m                                    100.00          
  [lvol0_rimage_0] raid_vg       iwi-aor--- 100.00m                                                    
  [lvol0_rimage_1] raid_vg       iwi-aor--- 100.00m                                                    
  [lvol0_rimage_2] raid_vg       iwi-aor--- 100.00m                                                    
  [lvol0_rmeta_0]  raid_vg       ewi-aor---   4.00m                                                    
  [lvol0_rmeta_1]  raid_vg       ewi-aor---   4.00m                                                    
  [lvol0_rmeta_2]  raid_vg       ewi-aor---   4.00m                                                    
  root             rhel_vm255-21 -wi-ao----  <3.50g                                                    
  swap             rhel_vm255-21 -wi-ao---- 512.00m                                                    
[root@vm255-21 ~]# lvconvert --stripes 3 raid_vg/lvol0
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active logical volume raid_vg/lvol0 will grow it from 50 to 75 extents!
  Run "lvresize -l50 raid_vg/lvol0" to shrink it or use the additional capacity.
Are you sure you want to add 1 images to raid5 LV raid_vg/lvol0? [y/n]: y
  Logical volume raid_vg/lvol0 successfully converted.
[root@vm255-21 ~]# lvs -a
  LV               VG            Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lvol0            raid_vg       rwi-a-r--- 300.00m                                    100.00          
  [lvol0_rimage_0] raid_vg       iwi-aor--- 104.00m                                                    
  [lvol0_rimage_1] raid_vg       iwi-aor--- 104.00m                                                    
  [lvol0_rimage_2] raid_vg       iwi-aor--- 104.00m                                                    
  [lvol0_rimage_3] raid_vg       iwi-aor--- 104.00m                                                    
  [lvol0_rmeta_0]  raid_vg       ewi-aor---   4.00m                                                    
  [lvol0_rmeta_1]  raid_vg       ewi-aor---   4.00m                                                    
  [lvol0_rmeta_2]  raid_vg       ewi-aor---   4.00m                                                    
  [lvol0_rmeta_3]  raid_vg       ewi-aor---   4.00m                                                    
  root             rhel_vm255-21 -wi-ao----  <3.50g                                                    
  swap             rhel_vm255-21 -wi-ao---- 512.00m                                                    



--------------------------------------

If I run vgchange -an on the volume group before attempting the command, as https://bugzilla.redhat.com/show_bug.cgi?id=1365286#c11 shows, I get this:

"
[root@vm255-21 ~]# vgchange -an raid_vg
  0 logical volume(s) in volume group "raid_vg" now active
[root@vm255-21 ~]# lvcon^C
[root@vm255-21 ~]# lvconvert --type raid5 raid_vg/pool_tdata --stripes=5
  Using default stripesize 64.00 KiB.
  raid_vg/pool_tdata must be active to perform this operation.
"

So what *is* the correct way to reshape this raid5 thin pool? The ultimate objective is to increase the size of this raided pool so that thin LVs can be lvextended and the filesystems on them resized. After adding more PVs to the VG, I would want to reshape the pool so that the new PVs are also part of the raid, and then lvextend the pool. (It seems the reshape also lvextends the pool, and I'm not sure why it does that; note the

"
  WARNING: Adding stripes to active and open logical volume raid_vg/pool_tdata will grow it from 228 to 285 extents!
"

Comment 2 nikhil kshirsagar 2019-12-11 04:21:31 UTC
Noticed these in /var/log/messages,


Dec 10 22:46:07 vm255-21 kernel: INFO: task lvconvert:12632 blocked for more than 120 seconds.
Dec 10 22:46:07 vm255-21 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 10 22:46:07 vm255-21 kernel: lvconvert       D ffff8dbc36c61070     0 12632  12282 0x00000084
Dec 10 22:46:07 vm255-21 kernel: Call Trace:
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffad8bc174>] ? __queue_work+0x144/0x3f0
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf80a09>] schedule+0x29/0x70
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf7e511>] schedule_timeout+0x221/0x2d0
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc0072ef2>] ? dm_make_request+0x172/0x1a0 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadb50d27>] ? generic_make_request+0x147/0x380
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf80dbd>] wait_for_completion+0xfd/0x140
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffad8db4c0>] ? wake_up_state+0x20/0x20
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffada86f4d>] submit_bio_wait+0x6d/0x90
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadda0bd5>] sync_page_io+0x75/0x100
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc04479b8>] read_disk_sb+0x38/0x80 [dm_raid]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc04493f4>] raid_ctr+0x744/0x17f0 [dm_raid]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc0075e4d>] dm_table_add_target+0x17d/0x440 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc0079d97>] table_load+0x157/0x390 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc007b22e>] ctl_ioctl+0x24e/0x550 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc0079c40>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc007b53e>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffada5fb40>] do_vfs_ioctl+0x3a0/0x5a0
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffada5fde1>] SyS_ioctl+0xa1/0xc0
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf8de15>] ? system_call_after_swapgs+0xa2/0x146
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf8dede>] system_call_fastpath+0x25/0x2a
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf8de21>] ? system_call_after_swapgs+0xae/0x146
Dec 10 22:48:07 vm255-21 kernel: INFO: task lvconvert:12632 blocked for more than 120 seconds.
Dec 10 22:48:07 vm255-21 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 10 22:48:07 vm255-21 kernel: lvconvert       D ffff8dbc36c61070     0 12632  12282 0x00000084
Dec 10 22:48:07 vm255-21 kernel: Call Trace:
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffad8bc174>] ? __queue_work+0x144/0x3f0
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffadf80a09>] schedule+0x29/0x70
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffadf7e511>] schedule_timeout+0x221/0x2d0
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffc0072ef2>] ? dm_make_request+0x172/0x1a0 [dm_mod]
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffadb50d27>] ? generic_make_request+0x147/0x380
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffadf80dbd>] wait_for_completion+0xfd/0x140
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffad8db4c0>] ? wake_up_state+0x20/0x20
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffada86f4d>] submit_bio_wait+0x6d/0x90
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffadda0bd5>] sync_page_io+0x75/0x100
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffc04479b8>] read_disk_sb+0x38/0x80 [dm_raid]
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffc04493f4>] raid_ctr+0x744/0x17f0 [dm_raid]
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffc0075e4d>] dm_table_add_target+0x17d/0x440 [dm_mod]
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffc0079d97>] table_load+0x157/0x390 [dm_mod]
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffc007b22e>] ctl_ioctl+0x24e/0x550 [dm_mod]
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffc0079c40>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffc007b53e>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffada5fb40>] do_vfs_ioctl+0x3a0/0x5a0
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffada5fde1>] SyS_ioctl+0xa1/0xc0
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffadf8de15>] ? system_call_after_swapgs+0xa2/0x146
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffadf8dede>] system_call_fastpath+0x25/0x2a
Dec 10 22:48:07 vm255-21 kernel: [<ffffffffadf8de21>] ? system_call_after_swapgs+0xae/0x146


Dec 10 22:50:07 vm255-21 kernel: INFO: task lvconvert:12632 blocked for more than 120 seconds.
Dec 10 22:50:07 vm255-21 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 10 22:50:07 vm255-21 kernel: lvconvert       D ffff8dbc36c61070     0 12632  12282 0x00000084
Dec 10 22:50:07 vm255-21 kernel: Call Trace:
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffad8bc174>] ? __queue_work+0x144/0x3f0
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffadf80a09>] schedule+0x29/0x70
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffadf7e511>] schedule_timeout+0x221/0x2d0
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffc0072ef2>] ? dm_make_request+0x172/0x1a0 [dm_mod]
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffadb50d27>] ? generic_make_request+0x147/0x380
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffadf80dbd>] wait_for_completion+0xfd/0x140
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffad8db4c0>] ? wake_up_state+0x20/0x20
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffada86f4d>] submit_bio_wait+0x6d/0x90
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffadda0bd5>] sync_page_io+0x75/0x100
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffc04479b8>] read_disk_sb+0x38/0x80 [dm_raid]
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffc04493f4>] raid_ctr+0x744/0x17f0 [dm_raid]
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffc0075e4d>] dm_table_add_target+0x17d/0x440 [dm_mod]
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffc0079d97>] table_load+0x157/0x390 [dm_mod]
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffc007b22e>] ctl_ioctl+0x24e/0x550 [dm_mod]
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffc0079c40>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffc007b53e>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffada5fb40>] do_vfs_ioctl+0x3a0/0x5a0
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffada5fde1>] SyS_ioctl+0xa1/0xc0
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffadf8de15>] ? system_call_after_swapgs+0xa2/0x146
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffadf8dede>] system_call_fastpath+0x25/0x2a
Dec 10 22:50:07 vm255-21 kernel: [<ffffffffadf8de21>] ? system_call_after_swapgs+0xae/0x146
Dec 10 22:52:07 vm255-21 kernel: INFO: task lvconvert:12632 blocked for more than 120 seconds.
Dec 10 22:52:07 vm255-21 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 10 22:52:07 vm255-21 kernel: lvconvert       D ffff8dbc36c61070     0 12632  12282 0x00000084
Dec 10 22:52:07 vm255-21 kernel: Call Trace:
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffad8bc174>] ? __queue_work+0x144/0x3f0
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffadf80a09>] schedule+0x29/0x70
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffadf7e511>] schedule_timeout+0x221/0x2d0
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffc0072ef2>] ? dm_make_request+0x172/0x1a0 [dm_mod]
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffadb50d27>] ? generic_make_request+0x147/0x380
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffadf80dbd>] wait_for_completion+0xfd/0x140
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffad8db4c0>] ? wake_up_state+0x20/0x20
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffada86f4d>] submit_bio_wait+0x6d/0x90
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffadda0bd5>] sync_page_io+0x75/0x100
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffc04479b8>] read_disk_sb+0x38/0x80 [dm_raid]
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffc04493f4>] raid_ctr+0x744/0x17f0 [dm_raid]
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffc0075e4d>] dm_table_add_target+0x17d/0x440 [dm_mod]
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffc0079d97>] table_load+0x157/0x390 [dm_mod]
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffc007b22e>] ctl_ioctl+0x24e/0x550 [dm_mod]
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffc0079c40>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffc007b53e>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffada5fb40>] do_vfs_ioctl+0x3a0/0x5a0
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffada5fde1>] SyS_ioctl+0xa1/0xc0
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffadf8de15>] ? system_call_after_swapgs+0xa2/0x146
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffadf8dede>] system_call_fastpath+0x25/0x2a
Dec 10 22:52:07 vm255-21 kernel: [<ffffffffadf8de21>] ? system_call_after_swapgs+0xae/0x146



Dec 10 22:54:07 vm255-21 kernel: INFO: task lvconvert:12632 blocked for more than 120 seconds.
Dec 10 22:54:07 vm255-21 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 10 22:54:07 vm255-21 kernel: lvconvert       D ffff8dbc36c61070     0 12632  12282 0x00000084
Dec 10 22:54:07 vm255-21 kernel: Call Trace:
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffad8bc174>] ? __queue_work+0x144/0x3f0
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffadf80a09>] schedule+0x29/0x70
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffadf7e511>] schedule_timeout+0x221/0x2d0
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffc0072ef2>] ? dm_make_request+0x172/0x1a0 [dm_mod]
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffadb50d27>] ? generic_make_request+0x147/0x380
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffadf80dbd>] wait_for_completion+0xfd/0x140
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffad8db4c0>] ? wake_up_state+0x20/0x20
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffada86f4d>] submit_bio_wait+0x6d/0x90
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffadda0bd5>] sync_page_io+0x75/0x100
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffc04479b8>] read_disk_sb+0x38/0x80 [dm_raid]
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffc04493f4>] raid_ctr+0x744/0x17f0 [dm_raid]
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffc0075e4d>] dm_table_add_target+0x17d/0x440 [dm_mod]
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffc0079d97>] table_load+0x157/0x390 [dm_mod]
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffc007b22e>] ctl_ioctl+0x24e/0x550 [dm_mod]
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffc0079c40>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffc007b53e>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffada5fb40>] do_vfs_ioctl+0x3a0/0x5a0
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffada5fde1>] SyS_ioctl+0xa1/0xc0
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffadf8de15>] ? system_call_after_swapgs+0xa2/0x146
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffadf8dede>] system_call_fastpath+0x25/0x2a
Dec 10 22:54:07 vm255-21 kernel: [<ffffffffadf8de21>] ? system_call_after_swapgs+0xae/0x146
Dec 10 22:56:07 vm255-21 kernel: INFO: task lvconvert:12632 blocked for more than 120 seconds.
Dec 10 22:56:07 vm255-21 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 10 22:56:07 vm255-21 kernel: lvconvert       D ffff8dbc36c61070     0 12632  12282 0x00000084
Dec 10 22:56:07 vm255-21 kernel: Call Trace:
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffad8bc174>] ? __queue_work+0x144/0x3f0
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffadf80a09>] schedule+0x29/0x70
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffadf7e511>] schedule_timeout+0x221/0x2d0
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffc0072ef2>] ? dm_make_request+0x172/0x1a0 [dm_mod]
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffadb50d27>] ? generic_make_request+0x147/0x380
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffadf80dbd>] wait_for_completion+0xfd/0x140
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffad8db4c0>] ? wake_up_state+0x20/0x20
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffada86f4d>] submit_bio_wait+0x6d/0x90
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffadda0bd5>] sync_page_io+0x75/0x100
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffc04479b8>] read_disk_sb+0x38/0x80 [dm_raid]
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffc04493f4>] raid_ctr+0x744/0x17f0 [dm_raid]
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffc0075e4d>] dm_table_add_target+0x17d/0x440 [dm_mod]
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffc0079d97>] table_load+0x157/0x390 [dm_mod]
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffc007b22e>] ctl_ioctl+0x24e/0x550 [dm_mod]
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffc0079c40>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffc007b53e>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffada5fb40>] do_vfs_ioctl+0x3a0/0x5a0
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffada5fde1>] SyS_ioctl+0xa1/0xc0
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffadf8de15>] ? system_call_after_swapgs+0xa2/0x146
Dec 10 22:56:07 vm255-21 kernel: [<ffffffffadf8dede>] system_call_fastpath+0x25/0x2a

Comment 4 Heinz Mauelshagen 2019-12-11 15:15:22 UTC
Tested locally and reproduced. The workaround is reactivation, which likely requires a reboot (not tested yet).
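
(If reactivation without a reboot does work, it would presumably be a full deactivate/reactivate of the VG, something like:

  # vgchange -an raid_vg
  # vgchange -ay raid_vg

but, as noted, this has not been tested yet.)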

Comment 5 Heinz Mauelshagen 2019-12-11 19:59:04 UTC
See 'dmsetup info -c' output below for suspended devices after the last reshaping lvconvert to add stripes.

[root@fedora30 ~]# lvcreate --thinpool pool -L200m t
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "pool" created.

[root@fedora30 ~]# lvcreate -V1g -n t1 t/pool
<SNIP>
  Logical volume "t1" created.

[root@fedora30 ~]# mkfs -t xfs /dev/t/t1
meta-data=/dev/t/t1              isize=512    agcount=8, agsize=32768 blks
<SNIP>

# lvconvert -y --ty raid5 --stripes 3 t/pool_tdata
<SNIP>
  Logical volume t/pool_tdata successfully converted.

[root@fedora30 ~]# lvs -ao+segtype t
  LV                    VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type     
  [lvol0_pmspare]       t  ewi-------   4.00m                                                     linear   
  pool                  t  twi-aotz-- 200.00m             5.34   10.94                            thin-pool
  [pool_tdata]          t  rwi-aor--- 200.00m                                    100.00           raid1    
  [pool_tdata_rimage_0] t  iwi-aor--- 200.00m                                                     linear   
  [pool_tdata_rimage_1] t  iwi-aor--- 200.00m                                                     linear   
  [pool_tdata_rmeta_0]  t  ewi-aor---   4.00m                                                     linear   
  [pool_tdata_rmeta_1]  t  ewi-aor---   4.00m                                                     linear   
  [pool_tmeta]          t  ewi-ao----   4.00m                                                     linear   
  t1                    t  Vwi-a-tz--   1.00g pool        1.04                                    thin 

[root@fedora30 ~]# lvconvert -y --ty raid5 --stripes 3 t/pool_tdata
<SNIP>
  Logical volume t/pool_tdata successfully converted.

[root@fedora30 ~]# lvs -ao+segtype t
  LV                    VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type     
  [lvol0_pmspare]       t  ewi-------   4.00m                                                     linear   
  pool                  t  twi-aotz-- 200.00m             5.34   10.94                            thin-pool
  [pool_tdata]          t  rwi-aor--- 200.00m                                    100.00           raid5    
  [pool_tdata_rimage_0] t  iwi-aor--- 200.00m                                                     linear   
  [pool_tdata_rimage_1] t  iwi-aor--- 200.00m                                                     linear   
  [pool_tdata_rmeta_0]  t  ewi-aor---   4.00m                                                     linear   
  [pool_tdata_rmeta_1]  t  ewi-aor---   4.00m                                                     linear   
  [pool_tmeta]          t  ewi-ao----   4.00m                                                     linear   
  t1                    t  Vwi-a-tz--   1.00g pool        1.04                                    thin

[root@fedora30 ~]# lvconvert -y --ty raid5 --stripes 3 t/pool_tdata
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active and open logical volume t/pool_tdata will grow it from 50 to 150 extents!
  Run "lvresize -l50 t/pool_tdata" to shrink it or use the additional capacity.
  Internal error: Performing unsafe table load while 12 device(s) are known to be suspended:  (254:3) 

[root@fedora30 ~]# dmsetup table
t-t1: 0 2097152 thin 254:4 1
t-pool_tdata_rmeta_3: 0 8192 linear 66:48 2048
t-pool-tpool: 0 409600 thin-pool 254:2 254:3 128 0 0 
t-pool_tdata: 0 1228800 raid raid5_ls 9 128 region_size 4096 4 254:7 254:8 254:9 254:10 254:11 254:12 254:13 254:14
t-pool_tdata_rmeta_2: 0 8192 linear 66:64 2048
t-pool_tmeta: 0 8192 linear 66:96 2048
t-pool_tdata_rimage_3: 0 417792 linear 66:48 10240
t-pool_tdata_rmeta_1: 0 8192 linear 66:80 2048
t-pool_tdata_rimage_2: 0 417792 linear 66:64 10240
t-pool_tdata_rmeta_0: 0 8192 linear 8:0 419840
t-pool_tdata_rimage_1: 0 417792 linear 66:80 10240
t-pool_tdata_rimage_0: 0 409600 linear 8:0 10240
t-pool_tdata_rimage_0: 409600 8192 linear 8:0 428032
t-pool: 0 409600 linear 254:4 0

[root@fedora30 ~]# dmsetup info -c|grep -v fedora
Name                  Maj Min Stat Open Targ Event  UUID                                                                      
t-t1                  254   6 L--w    0    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1ljwh0rCvNN9aurMxWhhlBM0FLB36uIeK      
t-pool_tdata_rmeta_3  254  13 L-sw    1    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1fd0eQdwOulrd4WfCbuLE3yjfv3QP0NBY      
t-pool-tpool          254   4 LIsw    2    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1iBYzM9eQ2cPqTEGFU6ItF43eO38oetxp-tpool
t-pool_tdata          254   3 L-sw    1    1      1 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1fNdQu3zW23fZL834UqVxM9K2Ph8kkkTR-tdata
t-pool_tdata_rmeta_2  254  11 L-sw    1    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1yXS75gRDPv1y988IWhdHnlcBpUwUpLNc      
t-pool_tmeta          254   2 L-sw    1    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1YrVHgcMMJgvBaneuhhDZsQ8Y97hwnhzP-tmeta
t-pool_tdata_rimage_3 254  14 L-sw    1    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1cMvLfDmNr6B0Iu04goO5LKAsQyvCSNnt      
t-pool_tdata_rmeta_1  254   9 L-sw    1    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1NOCbQkFJQ7jvj1jfwDrerWJppgg94iVh      
t-pool_tdata_rimage_2 254  12 L-sw    1    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1FaHXR2ny7EhKGvT4tup0GQt3Wcexf2Y6      
t-pool_tdata_rmeta_0  254   7 L-sw    1    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1iWZxmIDPDM7BnzSeplSetXZlifabW2Ux      
t-pool_tdata_rimage_1 254  10 L-sw    1    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy13rlF5pP6LjoDv1c26I9KeHSzWjPlnNph      
t-pool_tdata_rimage_0 254   8 L-sw    1    2      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1hD7rdx8qHhLf0inRjENviFPFnuerVa5g      
t-pool                254   5 LIsw    0    1      0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1iBYzM9eQ2cPqTEGFU6ItF43eO38oetxp-pool

Resuming those manually leads to a kernel error:

[  523.032566] device-mapper: table: 254:4: dm-3 too small for target: start=0, len=1253376, dev_size=1228800
[  528.734063] device-mapper: table: 254:5: dm-4 too small for target: start=0, len=1253376, dev_size=409600

which refers to the _tdata and -tpool devices.
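
(Resuming manually here presumably means resuming the suspended dm devices directly, e.g.:

  # dmsetup resume t-pool_tdata
  # dmsetup resume t-pool-tpool

for the devices shown as suspended in the dmsetup output above.)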

Comment 6 Heinz Mauelshagen 2019-12-11 20:20:41 UTC
Created attachment 1644168 [details]
lvconvert -vvvv output for the hanging conversion

Comment 36 RHEL Program Management 2023-04-29 07:28:11 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 37 Red Hat Bugzilla 2023-09-18 00:19:08 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

