Description of problem:

[root@hayes-03 ~]# lvcreate --yes --type linear -n vdo_pool -L 50G vdo_sanity
  Wiping vdo signature on /dev/vdo_sanity/vdo_pool.
  Logical volume "vdo_pool" created.

[root@hayes-03 ~]# lvconvert --yes --type vdo-pool -n vdo_lv -V 100G vdo_sanity/vdo_pool
  WARNING: Converting logical volume vdo_sanity/vdo_pool to VDO pool volume with formating.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
    The VDO volume can address 46 GB in 23 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/vdo_pool to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.

[root@hayes-03 ~]# vgchange -an vdo_sanity
  0 logical volume(s) in volume group "vdo_sanity" now active

[root@hayes-03 ~]# lvextend --yes -L +500M vdo_sanity/vdo_lv
  Size of logical volume vdo_sanity/vdo_lv changed from 100.00 GiB (25600 extents) to <100.49 GiB (25725 extents).
  Logical volume vdo_sanity/vdo_lv successfully resized.

[root@hayes-03 ~]# vgchange -ay vdo_sanity
  device-mapper: reload ioctl on (253:1) failed: Input/output error
  0 logical volume(s) in volume group "vdo_sanity" now active

Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: Detected version mismatch between kernel module and tools kernel: 4, tool: 2
Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: Please consider upgrading management tools to match kernel.
Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: loading device '253:1'
Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: zones: 1 logical, 1 physical, 1 hash; total threads: 12
Aug 17 15:38:48 hayes-03 kernel: kvdo11:journalQ: A logical size of 26342656 blocks was specified, but that differs from the 26214656 blocks configured in the vdo super block
Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: Could not start VDO device. (VDO error 1476, message Cannot load metadata from device)
Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: vdo_map_to_system_error: mapping internal status code 1476 (VDO_PARAMETER_MISMATCH: VDO Status: Parameters have conflicting values)
Aug 17 15:38:48 hayes-03 kernel: device-mapper: table: 253:1: vdo: Cannot load metadata from device (-EIO)
Aug 17 15:38:48 hayes-03 kernel: device-mapper: ioctl: error adding target to table

Version-Release number of selected component (if applicable):
kernel-5.14.0-138.el9       BUILT: Sun Jul 31 06:20:38 AM CDT 2022
lvm2-2.03.16-3.el9          BUILT: Mon Aug  1 04:42:35 AM CDT 2022
lvm2-libs-2.03.16-3.el9     BUILT: Mon Aug  1 04:42:35 AM CDT 2022
vdo-8.2.0.2-1.el9           BUILT: Tue Jul 19 02:28:15 PM CDT 2022
kmod-kvdo-8.2.0.2-41.el9    BUILT: Thu Jul 28 05:24:49 PM CDT 2022

How reproducible:
Every time
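For reference, the mismatch in the journalQ message is exactly the 500 MiB extension that was applied while the volume was inactive (VDO uses 4 KiB blocks); lvextend updated the LVM metadata, but the VDO super block was never told about the new logical size. A quick check of the arithmetic from the log above:

```shell
# Block counts taken verbatim from the kernel log above.
specified=26342656    # "A logical size of 26342656 blocks was specified"
superblock=26214656   # "the 26214656 blocks configured in the vdo super block"

# Each VDO logical block is 4 KiB; convert the delta to MiB.
delta_mib=$(( (specified - superblock) * 4096 / 1024 / 1024 ))
echo "${delta_mib} MiB"   # -> 500 MiB, the size of the offline lvextend
```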
Resizing a VDO LV or VDO pool with this patch (https://listman.redhat.com/archives/lvm-devel/2023-January/024535.html) now requires the volumes to be active, until the VDO target can support a proper resize upon the next activation.
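Given that constraint, a script driving resizes can activate the LV first and then extend it. A minimal sketch, assuming the VG/LV names from this report; the helper function names are mine, only lvs/lvchange/lvextend are real lvm2 commands:

```shell
# Return success if the raw `lvs --noheadings -o lv_active` output says "active".
vdo_lv_is_active() {
    [ "$(printf '%s' "$1" | tr -d '[:space:]')" = "active" ]
}

# Activate the VDO LV if needed, then extend it by the requested size.
safe_vdo_extend() {    # usage: safe_vdo_extend vdo_sanity/vdo_lv +500M
    lv="$1"; size="$2"
    if ! vdo_lv_is_active "$(lvs --noheadings -o lv_active "$lv")"; then
        lvchange -ay "$lv"
    fi
    lvextend --yes -L "$size" "$lv"
}
```

For example, `safe_vdo_extend vdo_sanity/vdo_lv +500M` would reproduce the successful sequence shown in the verification transcript below.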
Marking this Verified:Tested with the latest build, with the caveat that this patch now causes bug 2187747 for snapshot extend attempts.

kernel-5.14.0-322.el9    BUILT: Fri Jun  2 10:00:53 AM CEST 2023
lvm2-2.03.21-2.el9       BUILT: Thu May 25 12:03:04 AM CEST 2023
lvm2-libs-2.03.21-2.el9  BUILT: Thu May 25 12:03:04 AM CEST 2023

[root@virt-499 ~]# lvcreate --yes --type linear -n vdo_pool -L 50G vdo_sanity
  Wiping vdo signature on /dev/vdo_sanity/vdo_pool.
  Logical volume "vdo_pool" created.

[root@virt-499 ~]# lvconvert --yes --type vdo-pool -n vdo_lv -V 100G vdo_sanity/vdo_pool
  WARNING: Converting logical volume vdo_sanity/vdo_pool to VDO pool volume with formatting.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
    The VDO volume can address 46 GB in 23 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/vdo_pool to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.

[root@virt-499 ~]# vgchange -an vdo_sanity
  0 logical volume(s) in volume group "vdo_sanity" now active

[root@virt-499 ~]# lvextend --yes -L +500M vdo_sanity/vdo_lv
  Cannot resize inactive VDO logical volume vdo_sanity/vdo_lv.

[root@virt-499 ~]# vgchange -ay vdo_sanity
  1 logical volume(s) in volume group "vdo_sanity" now active

[root@virt-499 ~]# lvextend --yes -L +500M vdo_sanity/vdo_lv
  Size of logical volume vdo_sanity/vdo_lv changed from 100.00 GiB (25600 extents) to <100.49 GiB (25725 extents).
  Logical volume vdo_sanity/vdo_lv successfully resized.