Bug 2114006 - Creating vdo without specifying virtual size fails.
Summary: Creating vdo without specifying virtual size fails.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: lvm2
Version: 9.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: Filip Suba
URL:
Whiteboard:
Depends On:
Blocks: 2121429
 
Reported: 2022-08-02 14:29 UTC by Filip Suba
Modified: 2025-05-03 04:25 UTC (History)
13 users

Fixed In Version: lvm2-2.03.17-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2121429 (view as bug list)
Environment:
Last Closed: 2023-05-09 08:23:40 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
dmesg_output.txt (1.43 KB, text/plain), attached 2022-08-02 14:29 UTC by Filip Suba


Links
Red Hat Issue Tracker CLUSTERQE-6177 (last updated 2022-11-18 15:03:21 UTC)
Red Hat Issue Tracker RHELPLAN-129910 (last updated 2022-08-02 14:37:23 UTC)
Red Hat Product Errata RHBA-2023:2544 (last updated 2023-05-09 08:23:57 UTC)

Description Filip Suba 2022-08-02 14:29:33 UTC
Created attachment 1902876 [details]
dmesg_output.txt

Description of problem:
Creating a VDO volume without specifying a virtual size fails. I could not reproduce this issue with kmod-kvdo-8.1.1.371-41.el9 and vdo-8.1.1.360-1.el9. The dmesg output is attached.


Version-Release number of selected component (if applicable):
kmod-kvdo-8.2.0.2-41.el9
vdo-8.2.0.2-1.el9

How reproducible:
always

Steps to Reproduce:
1. truncate -s 15g loop
2. losetup loop0 loop
3. vgcreate vg /dev/loop0
4. lvcreate --type vdo -l 100%FREE -n vdo vg/vdo_pool

Actual results:
vdo is not created.


Expected results:
vdo is created successfully.


Additional info:
# truncate -s 15g loop
# losetup loop0 loop
# vgcreate vg /dev/loop0
  Physical volume "/dev/loop0" successfully created.
  Creating devices file /etc/lvm/devices/system.devices
  Volume group "vg" successfully created
# lvcreate --type vdo -l 100%FREE -n vdo vg/vdo_pool
    Logical blocks defaulted to 3139552 blocks.
    The VDO volume can address 12 GB in 6 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  device-mapper: reload ioctl on  (252:1) failed: Input/output error
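The sizing in the messages above can be checked with simple arithmetic (a sketch; the 2 GB default slab size and the 8192-slab per-volume limit are taken from the lvcreate output, and GB here means GiB):

```shell
slab_gb=2        # default slab size reported by lvcreate
max_slabs=8192   # per-volume limit from "at most ... in 8192 slabs"
addressable_gb=12
echo "data slabs: $(( addressable_gb / slab_gb ))"
echo "max physical: $(( max_slabs * slab_gb / 1024 )) TB"
```

This reproduces the "6 data slabs" and "16 TB" figures in the transcript.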

If virtual size is specified, vdo is successfully created:

# lvcreate --type vdo -L 20G vgloop0
    Logical blocks defaulted to 4186130 blocks.
    The VDO volume can address 16 GB in 8 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  device-mapper: reload ioctl on  (253:10) failed: Input/output error
  Failed to activate new LV vgloop0/lvol0.
# lvcreate --type vdo -L 20G -V 100G vgloop0
WARNING: vdo signature detected on /dev/vgloop0/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/vgloop0/vpool0.
    The VDO volume can address 16 GB in 8 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol0" created.

Comment 1 Corey Marthaler 2022-08-05 16:37:50 UTC
This blocks lvm regression testing currently.

Comment 3 Zdenek Kabelac 2022-08-09 15:16:26 UTC
The problem comes from an incompatibility with the older VDO target driver: the driver now requires a strict match for the formatted size, whereas previously an 'lvm2 extent aligned' size was accepted.

For now, the issue has been fixed on the lvm2 side by calling 'vdoformat' twice: the first call obtains an estimate of the size, and the second call formats with the aligned size.

Introduced in commit https://listman.redhat.com/archives/lvm-devel/2022-July/024234.html

(assuming this patch set is part of the upcoming lvm2 release).

That said, the VDO target driver should still be fixed to keep backward compatibility with older user-space binaries.
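The size mismatch can be illustrated with extent arithmetic (a sketch; the 4 MiB default lvm2 extent size and the 4 KiB VDO block size are assumptions, not values stated in this bug):

```shell
extent_kb=4096                    # assumed default lvm2 extent size (4 MiB)
formatted_kb=$(( 3139552 * 4 ))   # "Logical blocks defaulted to 3139552", 4 KiB VDO blocks
remainder=$(( formatted_kb % extent_kb ))
echo "remainder: ${remainder} KiB"   # nonzero: the formatted size is not extent aligned
```

A nonzero remainder means the size vdoformat produced differs from what lvm2 would round to, which the newer driver rejects.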

Comment 7 Corey Marthaler 2022-08-25 16:22:31 UTC
This is verified fixed in the latest kmod-kvdo, so does that mean we don't need an lvm2 build for this until 9.2?

kernel-5.14.0-157.el9    BUILT: Wed Aug 24 05:00:50 PM CDT 2022
lvm2-2.03.16-3.el9    BUILT: Mon Aug  1 04:42:35 AM CDT 2022
vdo-8.2.0.2-1.el9    BUILT: Tue Jul 19 02:28:15 PM CDT 2022
kmod-kvdo-8.2.0.18-46.el9    BUILT: Thu Aug 25 01:53:52 AM CDT 2022


[root@hayes-02 ~]# vgcreate vg /dev/sdj1
  Volume group "vg" successfully created
[root@hayes-02 ~]# lvcreate --type vdo -l 100%FREE -n vdo vg/vdo_pool
WARNING: vdo signature detected on /dev/vg/vdo_pool at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/vg/vdo_pool.
    Logical blocks defaulted to 486134360 blocks.
    The VDO volume can address 1 TB in 929 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdo" created.

Comment 20 Corey Marthaler 2022-12-15 20:12:44 UTC
Verified in the latest rpms as well.

kernel-5.14.0-205.el9    BUILT: Fri Dec  2 07:14:37 AM CST 2022
lvm2-2.03.17-3.el9    BUILT: Wed Dec  7 10:41:40 AM CST 2022
lvm2-libs-2.03.17-3.el9    BUILT: Wed Dec  7 10:41:40 AM CST 2022

Comment 22 errata-xmlrpc 2023-05-09 08:23:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2544

Comment 23 Tigerblue77 2024-11-25 18:34:11 UTC
Hello,
Sorry to reopen this, but I was able to reproduce the same error on a Debian 12.8 machine, kernel 6.11.0-1. LVM version:

  LVM version:     2.03.16(2) (2022-05-18)
  Library version: 1.02.185 (2022-05-18)
  Driver version:  4.48.0

vgs:

  VG   #PV #LV #SN Attr   VSize    VFree
  VG-1   1   2   0 wz--n-  <40.02t 39.92t
  VG-2   1   0   0 wz--n-   <1.82t <1.82t
  pve    1   2   0 wz--n- <118.08g 14.75g

LVM profile:

# Custom configuration for VDO using compression/deduplication and more CPUs (based on local vCPU count)
# Custom parameters based on: https://is.muni.cz/th/rq7e2/petrovic_diploma_thesis.pdf

allocation {
        vdo_use_compression=1
        vdo_use_deduplication=1
        vdo_use_metadata_hints=1
        vdo_minimum_io_size=4096
        vdo_block_map_cache_size_mb=8192
        vdo_block_map_period=16380
        vdo_check_point_frequency=0
        vdo_use_sparse_index=0
        vdo_index_memory_size_mb=256
        vdo_slab_size_mb=8192 #16384 #32768
        vdo_ack_threads=8
        vdo_bio_threads=32
        vdo_bio_rotation=64
        vdo_cpu_threads=16
        vdo_hash_zone_threads=8
        vdo_logical_threads=8
        vdo_physical_threads=16
        vdo_write_policy="auto"
        vdo_max_discard=1
}

Then: lvcreate --type vdo --name VDO-LV-1 --size 100G --virtualsize 200G --metadataprofile vdo-compressed-deduplicated VG-1

Output for slabs <= 4096 MB:

WARNING: vdo signature detected on /dev/VG-1/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/VG-1/vpool0.
    The VDO volume can address 96.00 GB in 24 data slabs, each 4.00 GB.
    It can grow to address at most 32.00 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "VDO-LV-1" created.

Output for slabs > 4096 MB:

WARNING: vdo signature detected on /dev/VG-1/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/VG-1/vpool0.
    The VDO volume can address 96.00 GB in 3 data slabs, each 32.00 GB.
    It can grow to address at most 256.00 TB of physical storage in 8192 slabs.
  device-mapper: reload ioctl on  (252:1) failed: Input/output error
  Failed to activate new LV VG-1/VDO-LV-1.
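The difference between the two runs follows from the slab arithmetic (a sketch using the sizes from the outputs above; the 96 GB addressable size is what lvcreate reports for a 100G --size):

```shell
size_gb=96   # addressable size reported by lvcreate for --size 100G
for slab_gb in 4 32; do
  echo "slab ${slab_gb}G: $(( size_gb / slab_gb )) slabs, max $(( 8192 * slab_gb / 1024 )) TB"
done
```

This matches the 24 slabs / 32 TB and 3 slabs / 256 TB figures above; note how the larger slab size leaves very few slabs.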

Comment 24 Zdenek Kabelac 2024-11-26 13:06:19 UTC
Have you tried a newer version of lvm2?

What is in your 'dmesg' when this error ('device-mapper: reload ioctl on  (252:1) failed: Input/output error')
is being reported?

Comment 25 Tigerblue77 2025-01-02 17:50:34 UTC
Hello,
I'm using the LVM version that the latest Debian stable ships; unfortunately it's two years old. I didn't try any other version, but the problem turned out to lie elsewhere (read what follows).

First, I have to say that my current vdo-compressed-deduplicated LVM profile is :

allocation {
    vdo_use_compression=1
    vdo_use_deduplication=1
    vdo_use_metadata_hints=1
    vdo_minimum_io_size=4096
    vdo_block_map_cache_size_mb=8192
    vdo_block_map_period=16380
    vdo_check_point_frequency=0
    vdo_use_sparse_index=0
    vdo_index_memory_size_mb=256
    vdo_slab_size_mb=8192
    vdo_ack_threads=8
    vdo_bio_threads=32
    vdo_bio_rotation=64
    vdo_cpu_threads=16
    vdo_hash_zone_threads=8
    vdo_logical_threads=8
    vdo_physical_threads=16
    vdo_write_policy="sync"
    vdo_max_discard=1
}

Here is the content of my "dmesg" when the error happens (thanks, I didn't know where to look for logs!):

[ 2467.116899] device-mapper: vdo0:lvcreate: table line: V2 /dev/dm-4 26214400 4096 2097152 16380 on sync VG--1-COMPRESSED--DEDUPLICATED--VDO--POOL--1-vpool maxDiscard 1 ack 8 bio 32 bioRotationInterval 64 cpu 16 hash 8 logical 8 physical 16
[ 2467.116908] device-mapper: vdo0:lvcreate: Detected version mismatch between kernel module and tools kernel: 4, tool: 2
[ 2467.116911] device-mapper: vdo0:lvcreate: Please consider upgrading management tools to match kernel.
[ 2467.116924] device-mapper: vdo0:lvcreate: loading device '252:5'
[ 2467.116998] device-mapper: vdo0:lvcreate: zones: 8 logical, 16 physical, 8 hash; total threads: 69
[ 2467.130875] device-mapper: vdo: dm_vdo0:journal: 16 physical zones exceeds slab count 12: VDO Status: Bad configuration option (1468)
[ 2467.130898] device-mapper: vdo0:lvcreate: Could not start VDO device. (VDO error 1468, message Cannot load metadata from device)
[ 2467.172412] device-mapper: vdo0:lvcreate: vdo_status_to_errno: mapping internal status code 1468 (VDO_BAD_CONFIGURATION: VDO Status: Bad configuration option) to EIO
[ 2467.194134] device-mapper: table: 252:5: vdo: Cannot load metadata from device (-EIO)
[ 2467.194143] device-mapper: ioctl: error adding target to table

So I increased the --size argument so that there is at least one slab per vdo_physical_threads (>= 32*1024*16 MB in the current case), and everything went well:

    The VDO volume can address 576.00 GB in 18 data slabs, each 32.00 GB.
    It can grow to address at most 256.00 TB of physical storage in 8192 slabs.
  Logical volume "COMPRESSED-DEDUPLICATED-VDO-LV-1" created.
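The constraint behind the dmesg line "16 physical zones exceeds slab count 12" can be expressed as a minimum-size check (a sketch; the at-least-one-slab-per-physical-zone rule is inferred from that journal message):

```shell
vdo_physical_threads=16
vdo_slab_size_mb=$(( 32 * 1024 ))   # 32 GiB slabs, per the run above
min_size_mb=$(( vdo_physical_threads * vdo_slab_size_mb ))
echo "minimum --size: $(( min_size_mb / 1024 )) GiB"   # one slab per physical zone
```

The 576 GB in the successful run yields 18 slabs, safely above the 16-zone minimum of 512 GiB.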

This issue can be closed. Thanks for your help!

Comment 26 Red Hat Bugzilla 2025-05-03 04:25:03 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

