Bug 1784695
| Summary: | Do not allow reshape of a raid5 thinpool | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | nikhil kshirsagar <nkshirsa> |
| Component: | lvm2 | Assignee: | Heinz Mauelshagen <heinzm> |
| lvm2 sub component: | Mirroring and RAID | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | urgent | CC: | agk, cmarthal, heinzm, jbrassow, lmiksik, mcsontos, msnitzer, prajnoha, rhandlin, zkabelac |
| Version: | 7.7 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.186-7.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1796958 (view as bug list) | Environment: | |
| Last Closed: | 2020-03-31 20:04:51 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1796958 | | |
Description (nikhil kshirsagar, 2019-12-18 06:05:29 UTC)
Blocked (until a correct solution is found) by upstream commit: https://www.redhat.com/archives/lvm-devel/2020-January/msg00008.html

This turned out to be more restrictive than needed. Being worked on.

As a follow-up to comment 6, the restriction has been further limited by this patch (on the stable branch): https://www.redhat.com/archives/lvm-devel/2020-January/msg00032.html

The very scenario given in comment #0 appears unchanged with the latest rpms. Am I missing something? Please post devel unit test results.

```
3.10.0-1124.el7.x86_64                   BUILT: Thu 23 Jan 2020 10:09:44 AM CST
lvm2-2.02.186-6.el7                      BUILT: Fri Jan 31 12:26:22 CST 2020
lvm2-libs-2.02.186-6.el7                 BUILT: Fri Jan 31 12:26:22 CST 2020
lvm2-cluster-2.02.186-6.el7              BUILT: Fri Jan 31 12:26:22 CST 2020
lvm2-lockd-2.02.186-6.el7                BUILT: Fri Jan 31 12:26:22 CST 2020
lvm2-python-boom-0.9-24.el7              BUILT: Fri Jan 31 12:27:55 CST 2020
cmirror-2.02.186-6.el7                   BUILT: Fri Jan 31 12:26:22 CST 2020
device-mapper-1.02.164-6.el7             BUILT: Fri Jan 31 12:26:22 CST 2020
device-mapper-libs-1.02.164-6.el7        BUILT: Fri Jan 31 12:26:22 CST 2020
device-mapper-event-1.02.164-6.el7       BUILT: Fri Jan 31 12:26:22 CST 2020
device-mapper-event-libs-1.02.164-6.el7  BUILT: Fri Jan 31 12:26:22 CST 2020
```

```
[root@hayes-03 ~]# vgcreate raid_vg /dev/sd[bcdefgh]1
  Volume group "raid_vg" successfully created

[root@hayes-03 ~]# lvcreate -n pool --type raid5 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB (228 extents).
  Logical volume "pool" created.

[root@hayes-03 ~]# lvcreate -n poolmeta -L10M raid_vg
  Rounding up size to full physical extent 12.00 MiB
  Logical volume "poolmeta" created.

[root@hayes-03 ~]# lvs -a -o +devices
  LV              VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Devices
  pool            raid_vg rwi-a-r--- 912.00m                                    100.00   pool_rimage_0(0),pool_rimage_1(0),pool_rimage_2(0),pool_rimage_3(0),pool_rimage_4(0)
  [pool_rimage_0] raid_vg iwi-aor--- 228.00m                                             /dev/sdb1(1)
  [pool_rimage_1] raid_vg iwi-aor--- 228.00m                                             /dev/sdc1(1)
  [pool_rimage_2] raid_vg iwi-aor--- 228.00m                                             /dev/sdd1(1)
  [pool_rimage_3] raid_vg iwi-aor--- 228.00m                                             /dev/sde1(1)
  [pool_rimage_4] raid_vg iwi-aor--- 228.00m                                             /dev/sdf1(1)
  [pool_rmeta_0]  raid_vg ewi-aor---   4.00m                                             /dev/sdb1(0)
  [pool_rmeta_1]  raid_vg ewi-aor---   4.00m                                             /dev/sdc1(0)
  [pool_rmeta_2]  raid_vg ewi-aor---   4.00m                                             /dev/sdd1(0)
  [pool_rmeta_3]  raid_vg ewi-aor---   4.00m                                             /dev/sde1(0)
  [pool_rmeta_4]  raid_vg ewi-aor---   4.00m                                             /dev/sdf1(0)
  poolmeta        raid_vg -wi-a-----  12.00m                                             /dev/sdb1(58)

[root@hayes-03 ~]# lvconvert --thinpool raid_vg/pool --poolmetadata poolmeta
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool and raid_vg/poolmeta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool and raid_vg/poolmeta? [y/n]: y
  Converted raid_vg/pool and raid_vg/poolmeta to thin pool.
```
```
[root@hayes-03 ~]# lvs -a -o +devices
  LV                    VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Devices
  [lvol0_pmspare]       raid_vg ewi-------  12.00m                                             /dev/sdb1(61)
  pool                  raid_vg twi-a-tz-- 912.00m             0.00   10.29                    pool_tdata(0)
  [pool_tdata]          raid_vg rwi-aor--- 912.00m                                    100.00   pool_tdata_rimage_0(0),pool_tdata_rimage_1(0),pool_tdata_rimage_2(0),pool_tdata_rimage_3(0),pool_tdata_rimage_4(0)
  [pool_tdata_rimage_0] raid_vg iwi-aor--- 228.00m                                             /dev/sdb1(1)
  [pool_tdata_rimage_1] raid_vg iwi-aor--- 228.00m                                             /dev/sdc1(1)
  [pool_tdata_rimage_2] raid_vg iwi-aor--- 228.00m                                             /dev/sdd1(1)
  [pool_tdata_rimage_3] raid_vg iwi-aor--- 228.00m                                             /dev/sde1(1)
  [pool_tdata_rimage_4] raid_vg iwi-aor--- 228.00m                                             /dev/sdf1(1)
  [pool_tdata_rmeta_0]  raid_vg ewi-aor---   4.00m                                             /dev/sdb1(0)
  [pool_tdata_rmeta_1]  raid_vg ewi-aor---   4.00m                                             /dev/sdc1(0)
  [pool_tdata_rmeta_2]  raid_vg ewi-aor---   4.00m                                             /dev/sdd1(0)
  [pool_tdata_rmeta_3]  raid_vg ewi-aor---   4.00m                                             /dev/sde1(0)
  [pool_tdata_rmeta_4]  raid_vg ewi-aor---   4.00m                                             /dev/sdf1(0)
  [pool_tmeta]          raid_vg ewi-ao----  12.00m                                             /dev/sdb1(58)

[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_tdata --stripes=5
  Using default stripesize 64.00 KiB.
  WARNING: Adding stripes to active and open logical volume raid_vg/pool_tdata will grow it from 228 to 285 extents!
  Run "lvresize -l228 raid_vg/pool_tdata" to shrink it or use the additional capacity.
Are you sure you want to add 1 images to raid5 LV raid_vg/pool_tdata? [y/n]: y
  Internal error: Performing unsafe table load while 15 device(s) are known to be suspended: (253:11)
```

[Deadlock]

```
Feb 4 10:24:50 hayes-03 kernel: INFO: task lvconvert:21636 blocked for more than 120 seconds.
Feb 4 10:24:50 hayes-03 kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 4 10:24:50 hayes-03 kernel: lvconvert D ffff98febcaac1c0 0 21636 2354 0x00000080
Feb 4 10:24:50 hayes-03 kernel: Call Trace:
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa92bc60c>] ? __queue_work+0x13c/0x3f0
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa9985d89>] schedule+0x29/0x70
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa9983891>] schedule_timeout+0x221/0x2d0
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffc0376e92>] ? dm_make_request+0x172/0x1a0 [dm_mod]
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa9554437>] ? generic_make_request+0x147/0x380
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa998613d>] wait_for_completion+0xfd/0x140
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa92db990>] ? wake_up_state+0x20/0x20
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa948a38d>] submit_bio_wait+0x6d/0x90
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa97a5205>] sync_page_io+0x75/0x100
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffc079e9b8>] read_disk_sb+0x38/0x80 [dm_raid]
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffc07a03f4>] raid_ctr+0x744/0x17f0 [dm_raid]
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffc0379ded>] dm_table_add_target+0x17d/0x440 [dm_mod]
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffc037dd37>] table_load+0x157/0x390 [dm_mod]
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffc037f1cb>] ctl_ioctl+0x24b/0x640 [dm_mod]
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffc037dbe0>] ? retrieve_status+0x1c0/0x1c0 [dm_mod]
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffc037f5ce>] dm_ctl_ioctl+0xe/0x20 [dm_mod]
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa94628a0>] do_vfs_ioctl+0x3a0/0x5b0
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa9462b51>] SyS_ioctl+0xa1/0xc0
Feb 4 10:24:50 hayes-03 kernel: [<ffffffffa9992ed2>] system_call_fastpath+0x25/0x2a
```
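As an aside not taken from the original report: when the reshape deadlocks like this, the device-mapper devices left suspended by the failed table load can be inspected with plain `dmsetup`. A minimal sketch, assuming the hung `lvconvert` is still present on the system:

```
# List each dm device together with its state; a hung reshape like the one
# above leaves a number of devices reported as SUSPENDED instead of ACTIVE.
dmsetup info | grep -E '^(Name|State)'
```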
OK, we were trapping changes like raid1 -> raid5 and similar, but the patch from comment 10 did not catch raid5 stripeX -> stripeY reshapes. It will need another small blocking patch.

Fixed by commit 253d10f840682f85dad0e4c29f55ff50f94792fa on the stable-2.02 branch.

The basic scenario in comment #0 now properly disallows the stacked reshape with the latest rpms. Continuing with additional conversion/reshape testing...

```
lvm2-2.02.186-7.el7                      BUILT: Mon Feb 10 09:04:11 CST 2020
lvm2-libs-2.02.186-7.el7                 BUILT: Mon Feb 10 09:04:11 CST 2020
lvm2-cluster-2.02.186-7.el7              BUILT: Mon Feb 10 09:04:11 CST 2020
lvm2-lockd-2.02.186-7.el7                BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-1.02.164-7.el7             BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-libs-1.02.164-7.el7        BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-event-1.02.164-7.el7       BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-event-libs-1.02.164-7.el7  BUILT: Mon Feb 10 09:04:11 CST 2020
```

```
[root@hayes-03 ~]# vgcreate raid_vg /dev/sd[bcdefgh]1
  Volume group "raid_vg" successfully created

[root@hayes-03 ~]# lvcreate -n pool --type raid5 --stripes=4 -L900M raid_vg
  Using default stripesize 64.00 KiB.
  Rounding size 900.00 MiB (225 extents) up to stripe boundary size 912.00 MiB (228 extents).
  Logical volume "pool" created.

[root@hayes-03 ~]# lvcreate -n poolmeta -L10M raid_vg
  Rounding up size to full physical extent 12.00 MiB
  Logical volume "poolmeta" created.

[root@hayes-03 ~]# lvconvert --thinpool raid_vg/pool --poolmetadata poolmeta
  Thin pool volume with chunk size 64.00 KiB can address at most 15.81 TiB of data.
  WARNING: Converting raid_vg/pool and raid_vg/poolmeta to thin pool's data and metadata volumes with metadata wiping.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Do you really want to convert raid_vg/pool and raid_vg/poolmeta? [y/n]: y
  Converted raid_vg/pool and raid_vg/poolmeta to thin pool.

[root@hayes-03 ~]# lvs -a -o +devices
  LV                    VG      Attr       LSize   Pool Origin Data%  Meta%  Cpy%Sync Devices
  [lvol0_pmspare]       raid_vg ewi-------  12.00m                                    /dev/sdb1(61)
  pool                  raid_vg twi-a-tz-- 912.00m             0.00   10.29           pool_tdata(0)
  [pool_tdata]          raid_vg rwi-aor--- 912.00m                           100.00   pool_tdata_rimage_0(0),pool_tdata_rimage_1(0),pool_tdata_rimage_2(0),pool_tdata_rimage_3(0),pool_tdata_rimage_4(0)
  [pool_tdata_rimage_0] raid_vg iwi-aor--- 228.00m                                    /dev/sdb1(1)
  [pool_tdata_rimage_1] raid_vg iwi-aor--- 228.00m                                    /dev/sdc1(1)
  [pool_tdata_rimage_2] raid_vg iwi-aor--- 228.00m                                    /dev/sdd1(1)
  [pool_tdata_rimage_3] raid_vg iwi-aor--- 228.00m                                    /dev/sde1(1)
  [pool_tdata_rimage_4] raid_vg iwi-aor--- 228.00m                                    /dev/sdf1(1)
  [pool_tdata_rmeta_0]  raid_vg ewi-aor---   4.00m                                    /dev/sdb1(0)
  [pool_tdata_rmeta_1]  raid_vg ewi-aor---   4.00m                                    /dev/sdc1(0)
  [pool_tdata_rmeta_2]  raid_vg ewi-aor---   4.00m                                    /dev/sdd1(0)
  [pool_tdata_rmeta_3]  raid_vg ewi-aor---   4.00m                                    /dev/sde1(0)
  [pool_tdata_rmeta_4]  raid_vg ewi-aor---   4.00m                                    /dev/sdf1(0)
  [pool_tmeta]          raid_vg ewi-ao----  12.00m                                    /dev/sdb1(58)

[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_tdata --stripes=5
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_tdata.
  Reshape request failed on LV raid_vg/pool_tdata.
```

Marking this bug verified in the latest build.
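For context, here is a minimal sketch of the workflow that remains supported after this fix. It reuses the VG/LV names from the report, but the sequence itself is an illustration rather than part of the bug: reshape the raid5 LV while it is still a top-level LV, wait for the reshape to finish, and only then stack it under a thin pool. Reshaping the stacked `pool_tdata` sub-LV afterwards is exactly what is now refused.

```
# Reshape (add a stripe) while raid_vg/pool is still a plain raid5 LV;
# lvconvert will prompt about the resulting size increase.
lvconvert --type raid5 --stripes 5 raid_vg/pool

# Wait for the reshape/resync to complete before continuing.
lvs -a -o lv_name,segtype,copy_percent raid_vg

# Only afterwards convert the LV into the thin pool's data volume.
lvconvert --thinpool raid_vg/pool --poolmetadata raid_vg/poolmeta
```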
```
3.10.0-1126.1.el7.x86_64
lvm2-2.02.186-7.el7                      BUILT: Mon Feb 10 09:04:11 CST 2020
lvm2-libs-2.02.186-7.el7                 BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-1.02.164-7.el7             BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-libs-1.02.164-7.el7        BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-event-1.02.164-7.el7       BUILT: Mon Feb 10 09:04:11 CST 2020
device-mapper-event-libs-1.02.164-7.el7  BUILT: Mon Feb 10 09:04:11 CST 2020
```

Normal, non-stacked takeover/reshape regression testing passes. Stacked reshape attempts are no longer allowed. That said, a few stacked "takeover" operations are still allowed, namely from raid1 and raid10.

```
# Takeover operations:

[root@hayes-03 ~]# lvs -o +segtype
  LV       VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type
  pool_r1  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r10 raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r4  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r6  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool

# raid1 -> raid5 still works
[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_r1_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  --stripes not allowed for LV raid_vg/pool_r1_tdata when converting from raid1 to raid5.
  Logical volume raid_vg/pool_r1_tdata successfully converted.

# "invalid"
[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_r4_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Replaced LV type raid5 (same as raid5_ls) with possible type raid5_n.
  Repeat this command to convert to raid5 after an interim conversion has finished.
  Invalid conversion request on raid_vg/pool_r4_tdata.

# "invalid"
[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_r6_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Replaced LV type raid5 (same as raid5_ls) with possible type raid6_ls_6.
  Repeat this command to convert to raid5 after an interim conversion has finished.
  Invalid conversion request on raid_vg/pool_r6_tdata.

# raid10 -> raid5 interim still works.
[root@hayes-03 ~]# lvconvert --type raid5 raid_vg/pool_r10_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Replaced LV type raid5 (same as raid5_ls) with possible type raid0_meta.
  Repeat this command to convert to raid5 after an interim conversion has finished.
  WARNING: ignoring --stripes option on takeover of raid_vg/pool_r10_tdata (reshape afterwards).
  Logical volume raid_vg/pool_r10_tdata successfully converted.
```

```
# Reshape operations:

[root@hayes-03 ~]# lvs -o +segtype
  LV       VG      Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Type
  pool_r1  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r10 raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r4  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool
  pool_r6  raid_vg twi-a-tz-- 900.00m             0.00   10.29                            thin-pool

# All reshape operations now fail
[root@hayes-03 ~]# lvconvert --type raid4 raid_vg/pool_r4_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_r4_tdata.
  Reshape request failed on LV raid_vg/pool_r4_tdata.

[root@hayes-03 ~]# lvconvert --type raid6 raid_vg/pool_r6_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_r6_tdata.
  Reshape request failed on LV raid_vg/pool_r6_tdata.

[root@hayes-03 ~]# lvconvert --type raid10 raid_vg/pool_r10_tdata --stripes=5 --yes
  Using default stripesize 64.00 KiB.
  Unable to convert stacked volume raid_vg/pool_r10_tdata.
  Reshape request failed on LV raid_vg/pool_r10_tdata.
```
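As a small, assumed illustration (not taken from the verification run above): whether a RAID LV is "stacked" in the sense this fix rejects can be read from the LVM report fields, since the data volume of a thin pool is reported as a private pool data sub-LV.

```
# Show which RAID LVs are private thin-pool data sub-LVs (the "stacked" case).
lvs -a -o lv_name,segtype,lv_layout,lv_role raid_vg
# pool_tdata is reported with a role along the lines of "private,thin,pool,data";
# reshape requests against such sub-LVs are what now fail with
#   "Unable to convert stacked volume ...".
```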
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:1129