Bug 1582647 - VDO volume is unable to growPhysical after device resize performed in previous boot
Summary: VDO volume is unable to growPhysical after device resize performed in previous boot
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kmod-kvdo
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Matthew Sakai
QA Contact: Jakub Krysl
URL:
Whiteboard:
Depends On:
Blocks: 1591180
 
Reported: 2018-05-25 20:10 UTC by Bryan Gurney
Modified: 2021-09-09 14:15 UTC
CC List: 9 users

Fixed In Version: 6.1.1.91
Doc Type: If docs needed, set a value
Doc Text:
Previously, VDO volumes were unable to grow if the underlying block device increased in size while the system was powered off or the VDO volume was offline. With this update, VDO correctly checks the requested size and the current size of the device, and, as a result, the described problem no longer occurs.
Clone Of:
Clones: 1591180
Environment:
Last Closed: 2018-10-30 09:39:31 UTC
Target Upstream Version:
Embargoed:
msakai: needinfo-




Links
Red Hat Product Errata RHBA-2018:3094 (Last Updated: 2018-10-30 09:40:09 UTC)

Description Bryan Gurney 2018-05-25 20:10:16 UTC
Description of problem:
If a system with a VDO volume is shut down, has its backing block device increased in size, and is then powered on, running "vdo growPhysical" on the VDO volume fails with "Requested physical block count 13107200 not greater than 13107200". Both numbers in that message are the new, larger size of the block device, not the previous physical size of the VDO volume's backing device, so the VDO volume is unable to grow.
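The mismatch can be confirmed from the shell by comparing the current size of the backing device with the physical block count recorded in the VDO configuration. A minimal sketch, assuming a 4096-byte block size and /dev/sdb as the backing device, as in this report:

# backing device size expressed in 4 KB blocks (blockdev reports bytes)
DEVICE_BLOCKS=$(( $(blockdev --getsize64 /dev/sdb) / 4096 ))
# physical block count recorded in the VDO configuration on the same device
CONFIG_BLOCKS=$(vdodumpconfig /dev/sdb | awk '/physicalBlocks/ {print $2}')
echo "device: ${DEVICE_BLOCKS} blocks, VDO config: ${CONFIG_BLOCKS} blocks"

In the failing case the device already reports the new, larger size (13107200 blocks) while the VDO config still holds the old size (10485760 blocks), yet the growPhysical request is rejected as "not greater than" the device size.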

Version-Release number of selected component (if applicable):
Found on kernel 3.10.0-855.el7.x86_64 with kmod-kvdo-6.1.0.149-13.el7.x86_64; also visible on kernel 3.10.0-862.el7.x86_64 with kmod-kvdo-153-15.el7.x86_64.

How reproducible:
100%

Steps to Reproduce:
1. Create a VDO volume on a KVM virtual machine running RHEL 7.5, on a spare virtual disk (I chose an IDE disk that appears as /dev/sdb, with a size of 40 GB).

vdo create --name=vdo1 --device=/dev/sdb

2. Shut down the virtual machine.
3. On the hypervisor, resize the qcow2 file of the test device to 50 gigabytes:

sudo qemu-img resize /mnt/vmstore/testvms/rhel75test_sdb.qcow2 50G

4. Power on the virtual machine.

5. After verifying that the test device is larger, try to growPhysical by executing "vdo growPhysical --name=vdo1"
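For reference, steps 1 through 5 can be sketched as a single sequence. The guest- and hypervisor-side commands are taken from the steps above; the shutdown and power-on are left as comments because the report does not show the exact virtualization commands used:

# --- on the guest (RHEL 7.5) ---
vdo create --name=vdo1 --device=/dev/sdb          # step 1: create VDO on the spare disk
# step 2: shut the guest down

# --- on the hypervisor ---
sudo qemu-img resize /mnt/vmstore/testvms/rhel75test_sdb.qcow2 50G   # step 3: grow the disk image

# --- on the guest, after powering it back on (step 4) ---
blockdev --getsize64 /dev/sdb                     # verify the test device is now larger
vdo growPhysical --name=vdo1                      # step 5: fails with "not greater than"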

Actual results:
# date; vdo growPhysical --name=vdo1 --verbose
Fri May 25 16:00:29 EDT 2018
    vdodumpconfig /dev/sdb
    dmsetup status vdo1
    dmsetup resume vdo1
    dmsetup status vdo1
    dmsetup message vdo1 0 prepareToGrowPhysical
    dmsetup suspend vdo1
    dmsetup message vdo1 0 growPhysical
vdo: ERROR - Cannot grow physical on VDO vdo1; device-mapper: message ioctl on vdo1  failed: Invalid argument
    dmsetup resume vdo1
vdo: ERROR - device-mapper: message ioctl on vdo1  failed: Invalid argument

May 25 16:00:29 rhel75test kernel: kvdo0:dmsetup: Preparing to resize physical to 13107200
May 25 16:00:29 rhel75test kernel: kvdo0:dmsetup: Done preparing to resize physical
May 25 16:00:29 rhel75test kernel: kvdo0:dmsetup: suspending device 'vdo1'
May 25 16:00:29 rhel75test kernel: kvdo0:dmsetup: device 'vdo1' suspended
May 25 16:00:29 rhel75test kernel: kvdo0:dmsetup: Requested physical block count 13107200 not greater than 13107200
May 25 16:00:29 rhel75test vdo: ERROR - Cannot grow physical on VDO vdo1; device-mapper: message ioctl on vdo1  failed: Invalid argument
May 25 16:00:29 rhel75test kernel: kvdo0:dmsetup: resuming device 'vdo1'
May 25 16:00:29 rhel75test kernel: kvdo0:dmsetup: device 'vdo1' resumed
May 25 16:00:29 rhel75test vdo: ERROR - device-mapper: message ioctl on vdo1  failed: Invalid argument

However, according to "vdodumpconfig", the VDO volume has a physical block count of 10485760 (40 GB), not 13107200 (50 GB):

# vdodumpconfig /dev/sdb
VDOConfig:
  blockSize: 4096
  logicalBlocks: 9418898
  physicalBlocks: 10485760
  slabSize: 524288
  recoveryJournalSize: 32768
  slabJournalBlocks: 224
UUID: 001a8f84-703f-4d24-9866-294c50f562a8

Expected results:
The "growPhysical" command succeeds in growing the VDO volume.

Additional info:

Comment 2 Bryan Gurney 2018-05-25 20:51:47 UTC
This is also reproducible if, instead of shutdown/resize/reboot, you perform the following steps on a VDO volume created on a partition with free space:

1. Unmount any filesystems, etc. using the VDO volume.
2. Execute "vdo stop" on the VDO volume.
3. Resize the partition to a slightly larger size (for example, from 40 GiB to 50 GiB):

# parted /dev/sdf resizePart 1 50GiB

4. Execute "vdo start" on the VDO volume.
5. Execute "vdo growPhysical" on the VDO volume.

In my example of a grow attempt from 40 GB to 50 GB, this appears:

kernel: kvdo43:dmsetup: Requested physical block count 13106944 not greater than 13106944

However, this opens up a potential workaround: try growing the partition again, while the VDO volume is online:

6. With the VDO volume online, resize the partition to an even larger size (for example, from 50 GiB to 60 GiB):

# parted /dev/sdf resizePart 1 60GiB

7. Execute "vdo growPhysical" on the VDO volume.

kernel: kvdo43:dmsetup: Physical block count was 13106944, now 15728384

At this point, the "old" block count is 50 GB (while the config is actually 40 GB), but the growPhysical size request is 60 GB, so the growPhysical action proceeds.
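For convenience, the partition-based reproduction and workaround above can be sketched as one sequence. The volume name vdo1 is hypothetical, since this comment does not name the volume; the sizes are the ones used in the example:

# step 1: unmount any filesystems or other users of the VDO volume
vdo stop --name=vdo1                    # step 2
parted /dev/sdf resizepart 1 50GiB      # step 3: first grow, while the volume is offline
vdo start --name=vdo1                   # step 4
vdo growPhysical --name=vdo1            # step 5: fails, "13106944 not greater than 13106944"

# workaround: grow the partition again while the volume stays online
parted /dev/sdf resizepart 1 60GiB      # step 6
vdo growPhysical --name=vdo1            # step 7: succeeds, 13106944 -> 15728384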

Comment 5 Jakub Krysl 2018-07-03 08:33:21 UTC
Tested on:
RHEL-7.6-20180626.0
kernel-3.10.0-915.el7
kmod-vdo-6.1.1.99-1.el7
vdo-6.1.1.99-2.el7

# vdo status --name vdo | grep 'Physical size'
    Physical size: 39062500K
# vdo growPhysical --name vdo
vdo: ERROR - Cannot prepare to grow physical on VDO vdo; device-mapper: message ioctl on vdo  failed: Invalid argument
vdo: ERROR - device-mapper: message ioctl on vdo  failed: Invalid argument
# vdo status --name vdo | grep 'Physical size'
    Physical size: 39062500K
# vdo stop --name vdo
Stopping VDO vdo
# parted /dev/sdb resizePart 1 50GiB
Information: You may need to update /etc/fstab.
# mount -a
# vdo status --name vdo | grep 'Physical size'
    Physical size: 39062500K
# vdo start --name vdo
Starting VDO vdo
Starting compression on VDO vdo
VDO instance 81 volume is ready at /dev/mapper/vdo
# vdo growPhysical --name vdo
# vdo status --name vdo | grep 'Physical size'
    Physical size: 50G

/var/log/messages:
[70275.709844] kvdo82:dmsetup: starting device 'vdo'
[70275.735504] kvdo82:dmsetup: underlying device, REQ_FLUSH: not supported, REQ_FUA: not supported
[70275.778142] kvdo82:dmsetup: Using write policy sync automatically.
[70275.807234] kvdo82:dmsetup: zones: 1 logical, 1 physical, 1 hash; base threads: 5
[70275.901170] kvdo82:journalQ: VDO commencing normal operation
[70275.927902] kvdo82:dmsetup: uds: kvdo82:dedupeQ: creating index: dev=/dev/disk/by-id/scsi-360fff19abdd9b56dfb9bf59625f4c9f7-part1 offset=4096 size=2781704192
uds: kvdo82:dedupeQ: Using 6 indexing zones for concurrency.

[70276.019412] Setting UDS index target state to online
[70276.043640] kvdo82:dmsetup: device 'vdo' started
[70276.065537] kvdo82:dmsetup: resuming device 'vdo'
[70276.087439] kvdo82:dmsetup: device 'vdo' resumed
[70276.129687] kvdo82:packerQ: compression is enabled
[70283.637074] kvdo82:dmsetup: Preparing to resize physical to 9765625
[70291.662667] kvdo82:dmsetup: suspending device 'vdo'
[70291.687584] kvdo82:dmsetup: device 'vdo' suspended
[70291.711892] kvdo82:dmsetup: stopping device 'vdo'
[70291.736280] kvdo82:dmsetup: uds: kvdo82:dedupeQ: index_0: beginning save (vcn 4294967295)

[70291.774365] Setting UDS index target state to closed
[70291.953167] kvdo82:dmsetup: device 'vdo' stopped
[70324.259813] kvdo83:dmsetup: starting device 'vdo'
[70324.284188] kvdo83:dmsetup: underlying device, REQ_FLUSH: not supported, REQ_FUA: not supported
[70324.325991] kvdo83:dmsetup: Using write policy sync automatically.
[70324.353840] kvdo83:dmsetup: zones: 1 logical, 1 physical, 1 hash; base threads: 5
[70324.590221] kvdo83:journalQ: VDO commencing normal operation
[70324.615822] kvdo83:dmsetup: uds: kvdo83:dedupeQ: loading or rebuilding index: dev=/dev/disk/by-id/scsi-360fff19abdd9b56dfb9bf59625f4c9f7-part1 offset=4096 size=2781704192
uds: kvdo83:dedupeQ: Using 6 indexing zones for concurrency.

[70324.708282] Setting UDS index target state to online
[70324.731296] kvdo83:dmsetup: device 'vdo' started
[70324.752139] kvdo83:dmsetup: resuming device 'vdo'
[70324.773667] kvdo83:dmsetup: device 'vdo' resumed
[70324.828335] kvdo83:packerQ: compression is enabled
[70325.161337] uds: kvdo83:dedupeQ: read index page map, last update 0
[70325.190981] uds: kvdo83:dedupeQ: index_0: loaded index from chapter 0 through chapter 0
[70332.657472] kvdo83:dmsetup: Preparing to resize physical to 13107200
[70332.689441] kvdo83:dmsetup: Done preparing to resize physical
[70332.723341] kvdo83:dmsetup: suspending device 'vdo'
[70332.746569] kvdo83:dmsetup: device 'vdo' suspended
[70335.169256] kvdo83:dmsetup: Physical block count was 13107200, now 13107200
[70335.218601] kvdo83:dmsetup: resuming device 'vdo'
[70335.239682] kvdo83:dmsetup: device 'vdo' resumed

There is no error and the growth is successful, but the grow messages in /var/log/messages appear to be wrong. The first (failed) growPhysical right after vdo creation reports the size as 9765625, but the second (successful) one after the resize reports 0 growth (from 13107200 to 13107200).

Matthew, do you prefer to fix it under this BZ or create a new one (and fix it later) as the core of this BZ is fixed?

Comment 8 Jakub Krysl 2018-07-11 09:05:20 UTC
I tested again with kmod-kvdo-6.1.1.91 and vdo-6.1.1.91 to see what happens when growing physical without growing the underlying device, since this fix removed the check for that case.

# vdo growPhysical --name vdo --verbose
    dmsetup status vdo
    dmsetup message vdo 0 prepareToGrowPhysical
    dmsetup suspend vdo
    dmsetup message vdo 0 growPhysical
    vdodumpconfig /dev/disk/by-id/scsi-360fff19abdd9b56dfb9bf59625f4c9f7-part1
    dmsetup resume vdo

/var/log/messages:
[171474.547355] kvdo2:dmsetup: Preparing to resize physical to 13107200
[171474.581461] kvdo2:dmsetup: Done preparing to resize physical
[171474.614708] kvdo2:dmsetup: suspending device 'vdo'
[171474.637274] kvdo2:dmsetup: device 'vdo' suspended
[171474.665569] kvdo2:dmsetup: Physical block count was 13107200, now 13107200
[171474.724816] kvdo2:dmsetup: resuming device 'vdo'
[171474.747035] kvdo2:dmsetup: device 'vdo' resumed

VDO handles this gracefully, growing by 0 blocks instead of failing.

However, this behaviour is not consistent with other kmod-kvdo versions: both the version before (kmod-kvdo-6.1.0.171-16) and the version after (kmod-kvdo-6.1.1.99) fail the message ioctl with 'Invalid argument'.
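On the versions that do keep the check, one way to avoid the failing ioctl is to issue growPhysical only when the backing device is actually larger than the physical size recorded in the VDO configuration. A minimal sketch, assuming the volume name and backing device path from this comment:

DEV=/dev/disk/by-id/scsi-360fff19abdd9b56dfb9bf59625f4c9f7-part1
# compare the device size (in 4 KB blocks) with the configured physical size
DEVICE_BLOCKS=$(( $(blockdev --getsize64 "$DEV") / 4096 ))
CONFIG_BLOCKS=$(vdodumpconfig "$DEV" | awk '/physicalBlocks/ {print $2}')
if [ "$DEVICE_BLOCKS" -gt "$CONFIG_BLOCKS" ]; then
    vdo growPhysical --name vdo
else
    echo "backing device has not grown; skipping growPhysical"
fi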

Comment 9 Jakub Krysl 2018-07-11 18:48:06 UTC
As this does not introduce any regression on 7.6 (because 6.1.1.91 is not in 7.6 and it is working as expected in 6.1.1.99), setting to verified.

Comment 12 errata-xmlrpc 2018-10-30 09:39:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3094

