Bug 2119195

Summary: extending vdo virtual volume while inactive results in failure to activate (vdo: Cannot load metadata from device)
Product: Red Hat Enterprise Linux 9 Reporter: Corey Marthaler <cmarthal>
Component: lvm2 Assignee: Zdenek Kabelac <zkabelac>
lvm2 sub component: VDO QA Contact: cluster-qe <cluster-qe>
Status: CLOSED ERRATA Docs Contact:
Severity: medium    
Priority: unspecified CC: agk, awalsh, heinzm, jbrassow, mcsontos, prajnoha, zkabelac
Version: 9.1 Keywords: Triaged
Target Milestone: rc Flags: pm-rhel: mirror+
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: lvm2-2.03.21-1.el9 Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2023-11-07 08:53:27 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Corey Marthaler 2022-08-17 20:46:37 UTC
Description of problem:
[root@hayes-03 ~]# lvcreate --yes --type linear -n vdo_pool  -L 50G vdo_sanity
  Wiping vdo signature on /dev/vdo_sanity/vdo_pool.
  Logical volume "vdo_pool" created.
[root@hayes-03 ~]# lvconvert --yes --type vdo-pool -n vdo_lv  -V 100G vdo_sanity/vdo_pool
  WARNING: Converting logical volume vdo_sanity/vdo_pool to VDO pool volume with formatting.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
    The VDO volume can address 46 GB in 23 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/vdo_pool to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.
[root@hayes-03 ~]# vgchange -an vdo_sanity
  0 logical volume(s) in volume group "vdo_sanity" now active
[root@hayes-03 ~]# lvextend --yes -L +500M vdo_sanity/vdo_lv
  Size of logical volume vdo_sanity/vdo_lv changed from 100.00 GiB (25600 extents) to <100.49 GiB (25725 extents).
  Logical volume vdo_sanity/vdo_lv successfully resized.
[root@hayes-03 ~]# vgchange -ay vdo_sanity
  device-mapper: reload ioctl on  (253:1) failed: Input/output error
  0 logical volume(s) in volume group "vdo_sanity" now active

Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: Detected version mismatch between kernel module and tools kernel: 4, tool: 2
Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: Please consider upgrading management tools to match kernel.
Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: loading device '253:1'
Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: zones: 1 logical, 1 physical, 1 hash; total threads: 12
Aug 17 15:38:48 hayes-03 kernel: kvdo11:journalQ: A logical size of 26342656 blocks was specified, but that differs from the 26214656 blocks configured in the vdo super block
Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: Could not start VDO device. (VDO error 1476, message Cannot load metadata from device)
Aug 17 15:38:48 hayes-03 kernel: kvdo11:vgchange: vdo_map_to_system_error: mapping internal status code 1476 (VDO_PARAMETER_MISMATCH: VDO Status: Parameters have conflicting values)
Aug 17 15:38:48 hayes-03 kernel: device-mapper: table: 253:1: vdo: Cannot load metadata from device (-EIO)
Aug 17 15:38:48 hayes-03 kernel: device-mapper: ioctl: error adding target to table
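
The log above shows why activation fails: the lvextend issued while the VG was inactive updated only the LVM metadata, so on activation the device-mapper table asks for the new logical size while the VDO super block still records the old one. The mismatch matches the 500 MiB extension exactly (a worked check, assuming VDO's standard 4 KiB block size):

  26342656 - 26214656 = 128000 blocks
  128000 blocks x 4 KiB = 512000 KiB = 500 MiB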


Version-Release number of selected component (if applicable):
kernel-5.14.0-138.el9    BUILT: Sun Jul 31 06:20:38 AM CDT 2022

lvm2-2.03.16-3.el9    BUILT: Mon Aug  1 04:42:35 AM CDT 2022
lvm2-libs-2.03.16-3.el9    BUILT: Mon Aug  1 04:42:35 AM CDT 2022

vdo-8.2.0.2-1.el9    BUILT: Tue Jul 19 02:28:15 PM CDT 2022
kmod-kvdo-8.2.0.2-41.el9    BUILT: Thu Jul 28 05:24:49 PM CDT 2022


How reproducible:
Every time

Comment 1 Zdenek Kabelac 2023-02-01 13:14:46 UTC
Resizing a VDO/VDOPOOL volume with this patch (https://listman.redhat.com/archives/lvm-devel/2023-January/024535.html)
requires the volumes to be active, until the VDO target can support a proper resize upon the next activation.
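
With that patch applied, the supported sequence is to activate the volume first and then extend it, e.g. (a sketch reusing the volume names from the reproducer above):

  vgchange -ay vdo_sanity
  lvextend --yes -L +500M vdo_sanity/vdo_lv

An extend attempt on an inactive VDO LV is now rejected outright instead of leaving the volume unable to activate, as the verification in comment 3 shows.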

Comment 3 Corey Marthaler 2023-06-07 20:11:10 UTC
Marking this Verified:Tested with the latest build, with the caveat that this patch now causes bug 2187747 for snapshot extend attempts.

kernel-5.14.0-322.el9    BUILT: Fri Jun  2 10:00:53 AM CEST 2023
lvm2-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023
lvm2-libs-2.03.21-2.el9    BUILT: Thu May 25 12:03:04 AM CEST 2023


[root@virt-499 ~]# lvcreate --yes --type linear -n vdo_pool  -L 50G vdo_sanity
  Wiping vdo signature on /dev/vdo_sanity/vdo_pool.
  Logical volume "vdo_pool" created.
[root@virt-499 ~]# lvconvert --yes --type vdo-pool -n vdo_lv  -V 100G vdo_sanity/vdo_pool
  WARNING: Converting logical volume vdo_sanity/vdo_pool to VDO pool volume with formatting.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
    The VDO volume can address 46 GB in 23 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo_lv" created.
  Converted vdo_sanity/vdo_pool to VDO pool volume and created virtual vdo_sanity/vdo_lv VDO volume.

[root@virt-499 ~]# vgchange -an vdo_sanity
  0 logical volume(s) in volume group "vdo_sanity" now active
[root@virt-499 ~]# lvextend --yes -L +500M vdo_sanity/vdo_lv
  Cannot resize inactive VDO logical volume vdo_sanity/vdo_lv.


[root@virt-499 ~]# vgchange -ay vdo_sanity
  1 logical volume(s) in volume group "vdo_sanity" now active
[root@virt-499 ~]# lvextend --yes -L +500M vdo_sanity/vdo_lv
  Size of logical volume vdo_sanity/vdo_lv changed from 100.00 GiB (25600 extents) to <100.49 GiB (25725 extents).
  Logical volume vdo_sanity/vdo_lv successfully resized.

Comment 9 errata-xmlrpc 2023-11-07 08:53:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:6633