Bug 1782045
Summary:            reshape of a raid5 thinpool results in a hung lvconvert with the error "Internal error: Performing unsafe table load while XX device(s) are known to be suspended"
Product:            Red Hat Enterprise Linux 8
Component:          lvm2
lvm2 sub component: Mirroring and RAID
Reporter:           nikhil kshirsagar <nkshirsa>
Assignee:           Heinz Mauelshagen <heinzm>
QA Contact:         cluster-qe <cluster-qe>
Status:             CLOSED WONTFIX
Severity:           urgent
Priority:           high
CC:                 agk, cmarthal, heinzm, jbrassow, msnitzer, prajnoha, rhandlin, zkabelac
Keywords:           Triaged
Target Milestone:   rc
Hardware:           Unspecified
OS:                 Unspecified
Doc Type:           If docs needed, set a value
Last Closed:        2023-04-29 07:28:11 UTC
Type:               Bug
Bug Depends On:     1785670
Bug Blocks:         1439399
Description (nikhil kshirsagar, 2019-12-11 04:03:54 UTC)
Noticed these in /var/log/messages:

```
Dec 10 22:46:07 vm255-21 kernel: INFO: task lvconvert:12632 blocked for more than 120 seconds.
Dec 10 22:46:07 vm255-21 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 10 22:46:07 vm255-21 kernel: lvconvert       D ffff8dbc36c61070     0 12632  12282 0x00000084
Dec 10 22:46:07 vm255-21 kernel: Call Trace:
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffad8bc174>] ? __queue_work+0x144/0x3f0
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf80a09>] schedule+0x29/0x70
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf7e511>] schedule_timeout+0x221/0x2d0
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc0072ef2>] ? dm_make_request+0x172/0x1a0 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadb50d27>] ? generic_make_request+0x147/0x380
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf80dbd>] wait_for_completion+0xfd/0x140
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffad8db4c0>] ? wake_up_state+0x20/0x20
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffada86f4d>] submit_bio_wait+0x6d/0x90
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadda0bd5>] sync_page_io+0x75/0x100
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc04479b8>] read_disk_sb+0x38/0x80 [dm_raid]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc04493f4>] raid_ctr+0x744/0x17f0 [dm_raid]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc0075e4d>] dm_table_add_target+0x17d/0x440 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc0079d97>] table_load+0x157/0x390 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc007b22e>] ctl_ioctl+0x24e/0x550 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc0079c40>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffc007b53e>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffada5fb40>] do_vfs_ioctl+0x3a0/0x5a0
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffada5fde1>] SyS_ioctl+0xa1/0xc0
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf8de15>] ? system_call_after_swapgs+0xa2/0x146
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf8dede>] system_call_fastpath+0x25/0x2a
Dec 10 22:46:07 vm255-21 kernel: [<ffffffffadf8de21>] ? system_call_after_swapgs+0xae/0x146
```

The identical hung-task trace for lvconvert:12632 repeats every two minutes thereafter (22:48:07, 22:50:07, 22:52:07, 22:54:07, and 22:56:07).

Tested locally and reproduced. Workaround is reactivation, which likely requires a reboot (not tested yet).
See `dmsetup info -c` output below for suspended devices after the last reshaping lvconvert to add stripes.

```
[root@fedora30 ~]# lvcreate --thinpool pool -L200m t
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  Logical volume "pool" created.
[root@fedora30 ~]# lvcreate -V1g -n t1 t/pool
<SNIP>
  Logical volume "t1" created.
[root@fedora30 ~]# mkfs -t xfs /dev/t/t1
meta-data=/dev/t/t1    isize=512    agcount=8, agsize=32768 blks
<SNIP>
[root@fedora30 ~]# lvconvert -y --ty raid5 --stripes 3 t/pool_tdata
<SNIP>
  Logical volume t/pool_tdata successfully converted.
[root@fedora30 ~]# lvs -ao+segtype t
  LV                    VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type
  [lvol0_pmspare]       t  ewi-------   4.00m                                                    linear
  pool                  t  twi-aotz-- 200.00m             5.34   10.94                           thin-pool
  [pool_tdata]          t  rwi-aor--- 200.00m                                 100.00             raid1
  [pool_tdata_rimage_0] t  iwi-aor--- 200.00m                                                    linear
  [pool_tdata_rimage_1] t  iwi-aor--- 200.00m                                                    linear
  [pool_tdata_rmeta_0]  t  ewi-aor---   4.00m                                                    linear
  [pool_tdata_rmeta_1]  t  ewi-aor---   4.00m                                                    linear
  [pool_tmeta]          t  ewi-ao----   4.00m                                                    linear
  t1                    t  Vwi-a-tz--   1.00g pool        1.04                                   thin
[root@fedora30 ~]# lvconvert -y --ty raid5 --stripes 3 t/pool_tdata
<SNIP>
  Logical volume t/pool_tdata successfully converted.
[root@fedora30 ~]# lvs -ao+segtype t
  LV                    VG Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type
  [lvol0_pmspare]       t  ewi-------   4.00m                                                    linear
  pool                  t  twi-aotz-- 200.00m             5.34   10.94                           thin-pool
  [pool_tdata]          t  rwi-aor--- 200.00m                                 100.00             raid5
  [pool_tdata_rimage_0] t  iwi-aor--- 200.00m                                                    linear
  [pool_tdata_rimage_1] t  iwi-aor--- 200.00m                                                    linear
  [pool_tdata_rmeta_0]  t  ewi-aor---   4.00m                                                    linear
  [pool_tdata_rmeta_1]  t  ewi-aor---   4.00m                                                    linear
  [pool_tmeta]          t  ewi-ao----   4.00m                                                    linear
  t1                    t  Vwi-a-tz--   1.00g pool        1.04                                   thin
[root@fedora30 ~]# lvconvert -y --ty raid5 --stripes 3 t/pool_tdata
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active and open logical volume t/pool_tdata will grow it from 50 to 150 extents!
  Run "lvresize -l50 t/pool_tdata" to shrink it or use the additional capacity.
  Internal error: Performing unsafe table load while 12 device(s) are known to be suspended: (254:3)
[root@fedora30 ~]# dmsetup table
t-t1: 0 2097152 thin 254:4 1
t-pool_tdata_rmeta_3: 0 8192 linear 66:48 2048
t-pool-tpool: 0 409600 thin-pool 254:2 254:3 128 0 0
t-pool_tdata: 0 1228800 raid raid5_ls 9 128 region_size 4096 4 254:7 254:8 254:9 254:10 254:11 254:12 254:13 254:14
t-pool_tdata_rmeta_2: 0 8192 linear 66:64 2048
t-pool_tmeta: 0 8192 linear 66:96 2048
t-pool_tdata_rimage_3: 0 417792 linear 66:48 10240
t-pool_tdata_rmeta_1: 0 8192 linear 66:80 2048
t-pool_tdata_rimage_2: 0 417792 linear 66:64 10240
t-pool_tdata_rmeta_0: 0 8192 linear 8:0 419840
t-pool_tdata_rimage_1: 0 417792 linear 66:80 10240
t-pool_tdata_rimage_0: 0 409600 linear 8:0 10240
t-pool_tdata_rimage_0: 409600 8192 linear 8:0 428032
t-pool: 0 409600 linear 254:4 0
[root@fedora30 ~]# dmsetup info -c | grep -v fedora
Name                  Maj Min Stat Open Targ Event UUID
t-t1                  254   6 L--w    0    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1ljwh0rCvNN9aurMxWhhlBM0FLB36uIeK
t-pool_tdata_rmeta_3  254  13 L-sw    1    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1fd0eQdwOulrd4WfCbuLE3yjfv3QP0NBY
t-pool-tpool          254   4 LIsw    2    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1iBYzM9eQ2cPqTEGFU6ItF43eO38oetxp-tpool
t-pool_tdata          254   3 L-sw    1    1     1 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1fNdQu3zW23fZL834UqVxM9K2Ph8kkkTR-tdata
t-pool_tdata_rmeta_2  254  11 L-sw    1    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1yXS75gRDPv1y988IWhdHnlcBpUwUpLNc
t-pool_tmeta          254   2 L-sw    1    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1YrVHgcMMJgvBaneuhhDZsQ8Y97hwnhzP-tmeta
t-pool_tdata_rimage_3 254  14 L-sw    1    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1cMvLfDmNr6B0Iu04goO5LKAsQyvCSNnt
t-pool_tdata_rmeta_1  254   9 L-sw    1    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1NOCbQkFJQ7jvj1jfwDrerWJppgg94iVh
t-pool_tdata_rimage_2 254  12 L-sw    1    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1FaHXR2ny7EhKGvT4tup0GQt3Wcexf2Y6
t-pool_tdata_rmeta_0  254   7 L-sw    1    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1iWZxmIDPDM7BnzSeplSetXZlifabW2Ux
t-pool_tdata_rimage_1 254  10 L-sw    1    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy13rlF5pP6LjoDv1c26I9KeHSzWjPlnNph
t-pool_tdata_rimage_0 254   8 L-sw    1    2     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1hD7rdx8qHhLf0inRjENviFPFnuerVa5g
t-pool                254   5 LIsw    0    1     0 LVM-epsvfAXgZvzmleM94A4tcsTMIBdLPyy1iBYzM9eQ2cPqTEGFU6ItF43eO38oetxp-pool
```

Resuming those manually leads to kernel errors

```
[  523.032566] device-mapper: table: 254:4: dm-3 too small for target: start=0, len=1253376, dev_size=1228800
[  528.734063] device-mapper: table: 254:5: dm-4 too small for target: start=0, len=1253376, dev_size=409600
```

which refer to the _tdata and -tpool devices.

Created attachment 1644168 [details]: lvconvert -vvvv output for the hanging conversion
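In the `dmsetup info -c` listing above, the suspended devices are the ones whose Stat column carries a lowercase `s` flag (e.g. `L-sw`, `LIsw`), as opposed to the live `t-t1` device (`L--w`). As a minimal sketch for pinpointing the stuck devices, assuming the default column layout shown above (the `list_suspended` helper name is my own, not part of dmsetup):

```shell
# list_suspended: read `dmsetup info -c --noheadings` output on stdin and
# print the names of suspended devices. The Stat column is field 4 (e.g.
# "L-sw" or "LIsw"); a lowercase 's' there marks a SUSPENDED device.
list_suspended() {
  awk '$4 ~ /s/ { print $1 }'
}

# On a live system (as root) one would run:
#   dmsetup info -c --noheadings | list_suspended
```

This reproduces the count the "Internal error" message complains about: all twelve pool sub-devices remain suspended while only `t-t1` stays live.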
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release; therefore, it is being closed. If plans change and this issue is targeted for a future release, the bug can be reopened. The needinfo request(s) on this closed bug have been removed, as they remained unresolved for 120 days.