
Bug 1857140

Summary: suspiciously high min size for raid to vdo pool conversion
Product: Red Hat Enterprise Linux 8
Component: lvm2
lvm2 sub component: Changing Logical Volumes
Version: 8.3
Hardware: x86_64
OS: Linux
Reporter: Roman Bednář <rbednar>
Assignee: Zdenek Kabelac <zkabelac>
QA Contact: cluster-qe <cluster-qe>
CC: agk, cmarthal, heinzm, jbrassow, mcsontos, msnitzer, pasik, prajnoha, zkabelac
Status: CLOSED ERRATA
Severity: unspecified
Priority: medium
Flags: pm-rhel: mirror+
Target Milestone: rc
Target Release: 8.0
Fixed In Version: lvm2-2.03.12-1.el8
Last Closed: 2021-11-09 19:45:20 UTC
Type: Bug

Attachments: lvconvert_vvvv (flags: none)

Description Roman Bednář 2020-07-15 08:54:42 UTC
Converting raid1 to an LVM VDO pool seems to require an unexpectedly high LV size (~37G).

Not really sure if this is a real bug or expected behavior. If it's expected, it should probably be explained somewhere in the docs.
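(For reference, the raid1 LV below can be created as follows; the creation step is missing from this session, and the command is taken from the reproduction in comment 9, with the same VG name assumed:)

# lvcreate --type raid1 -m2 -L 10G -n split_vdopool vg
  Logical volume "split_vdopool" created.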


# lvs -a -o lv_name,lv_size,segtype
  LV                       LSize  Type
  split_vdopool            10.00g raid1
  [split_vdopool_rimage_0] 10.00g linear
  [split_vdopool_rimage_1] 10.00g linear
  [split_vdopool_rimage_2] 10.00g linear
  [split_vdopool_rmeta_0]   4.00m linear
  [split_vdopool_rmeta_1]   4.00m linear
  [split_vdopool_rmeta_2]   4.00m linear

# lvconvert --yes --type vdo-pool -n vdolv vg/split_vdopool
  WARNING: Converting logical volume split_vdopool to VDO pool volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Minimum required size for VDO volume: 37276176384 bytes
vdoformat: formatVDO failed on '/dev/vg/split_vdopool': VDO Status: Out of space
  Command /usr/bin/vdoformat failed.
  Cannot format VDO pool volume vg/split_vdopool.
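(For scale, an illustrative check that is not part of the original session: the reported minimum is about 34.7 GiB, so extending to 37 GiB in the next step clears it:)

# echo 'scale=2; 37276176384 / 1024^3' | bc
34.71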

# lvextend -L37G vg/split_vdopool
  Extending 3 mirror images.
  Size of logical volume vg/split_vdopool changed from 10.00 GiB (2560 extents) to 37.00 GiB (9472 extents).
  Logical volume vg/split_vdopool successfully resized.

# lvconvert --yes --type vdo-pool -n vdolv vg/split_vdopool
  WARNING: Converting logical volume split_vdopool to VDO pool volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
    Logical blocks defaulted to 8375795 blocks.
    The VDO volume can address 32 GB in 1 data slab.
    It can grow to address at most 256 TB of physical storage in 8192 slabs.
  Logical volume "vdolv" created.
  Converted vg/split_vdopool to VDO pool volume and created virtual vg/vdolv VDO volume.

# lvs -a -o lv_name,lv_size,segtype
  LV                             LSize   Type
  split_vdopool                   37.00g vdo-pool
  [split_vdopool_vdata]           37.00g raid1
  [split_vdopool_vdata_rimage_0]  37.00g linear
  [split_vdopool_vdata_rimage_1]  37.00g linear
  [split_vdopool_vdata_rimage_2]  37.00g linear
  [split_vdopool_vdata_rmeta_0]    4.00m linear
  [split_vdopool_vdata_rmeta_1]    4.00m linear
  [split_vdopool_vdata_rmeta_2]    4.00m linear
  vdolv                          <31.95g vdo
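(Another illustrative check, not part of the original session: the vdolv size follows from the defaulted logical block count above, 8375795 blocks of 4 KiB each:)

# echo 'scale=2; 8375795 * 4096 / 1024^3' | bc
31.95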



kernel-4.18.0-224.el8.x86_64
kmod-kvdo-6.2.3.107-73.el8.x86_64
lvm2-2.03.09-3.el8.x86_64
lvm2-libs-2.03.09-3.el8.x86_64
vdo-6.2.3.107-14.el8.x86_64

Comment 2 Zdenek Kabelac 2020-10-11 07:59:45 UTC
From this message: 'The VDO volume can address 32 GB in 1 data slab', I'd have expected that either some profile
or lvm.conf provides   vdo_slab_size = 32768   (32GiB).
The VDO pool must then contain at least 1 such slab.
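(An illustrative sketch of that hypothesis, not from the original comment; lvm.conf takes allocation/vdo_slab_size in MiB, consistent with 32768 meaning 32 GiB above:)

# show the effective setting, including defaults
# lvmconfig --typeconfig full allocation/vdo_slab_size

# lvm.conf, or an attached profile, carrying the suspected value:
allocation {
        vdo_slab_size = 32768   # MiB, i.e. 32 GiB per slab
}

(With one mandatory 32 GiB slab plus VDO metadata overhead, a minimum around 37 GB would be plausible.)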

So is this the reason why such a high size was necessary?
(We could possibly enhance the command's reporting for such a case.)

Also, please provide a -vvvv trace from the failing command, which would shorten this 'guessing' phase.

Comment 3 Roman Bednář 2020-11-25 13:08:12 UTC
Created attachment 1733336 [details]
lvconvert_vvvv

Can't reproduce on the latest 8.4 nightly. However, lvconvert throws a device-mapper ioctl error. Attaching -vvvv output.

# lvs -a -o lv_name,lv_size,segtype
  LV                       LSize   Type  
  root                      <6.20g linear
  swap                     820.00m linear
  split_vdopool             10.00g raid1 
  [split_vdopool_rimage_0]  10.00g linear
  [split_vdopool_rimage_1]  10.00g linear
  [split_vdopool_rimage_2]  10.00g linear
  [split_vdopool_rmeta_0]    4.00m linear
  [split_vdopool_rmeta_1]    4.00m linear
  [split_vdopool_rmeta_2]    4.00m linear

# lvconvert --yes --type vdo-pool -n vdolv vg/split_vdopool
  WARNING: Converting logical volume split_vdopool to VDO pool volume.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
    Logical blocks defaulted to 1569686 blocks.
    The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  device-mapper: remove ioctl on  (253:8) failed: Device or resource busy
  Logical volume "vdolv" created.
  Converted vg/split_vdopool to VDO pool volume and created virtual vg/vdolv VDO volume.

# lvs -a -o lv_name,lv_size,segtype
  LV                             LSize   Type    
  root                            <6.20g linear  
  swap                           820.00m linear  
  split_vdopool                   10.00g vdo-pool
  [split_vdopool_vdata]           10.00g raid1   
  [split_vdopool_vdata_rimage_0]  10.00g linear  
  [split_vdopool_vdata_rimage_1]  10.00g linear  
  [split_vdopool_vdata_rimage_2]  10.00g linear  
  [split_vdopool_vdata_rmeta_0]    4.00m linear  
  [split_vdopool_vdata_rmeta_1]    4.00m linear  
  [split_vdopool_vdata_rmeta_2]    4.00m linear  
  vdolv                            5.98g vdo     
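(The same illustrative check as in comment 0, with this session's defaulted logical block count of 1569686 blocks:)

# echo 'scale=2; 1569686 * 4096 / 1024^3' | bc
5.98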


kernel-4.18.0-252.el8.x86_64
kernel-core-4.18.0-252.el8.x86_64
kernel-devel-4.18.0-252.el8.x86_64
kernel-headers-4.18.0-252.el8.x86_64
kernel-modules-4.18.0-252.el8.x86_64
kernel-modules-extra-4.18.0-252.el8.x86_64
kernel-tools-4.18.0-252.el8.x86_64
kernel-tools-libs-4.18.0-252.el8.x86_64
kmod-kvdo-6.2.4.26-75.el8.x86_64
lvm2-2.03.11-0.2.20201103git8801a86.el8.x86_64
lvm2-libs-2.03.11-0.2.20201103git8801a86.el8.x86_64
lvm2-lockd-2.03.11-0.2.20201103git8801a86.el8.x86_64
vdo-6.2.4.14-14.el8.x86_64

Comment 4 Zdenek Kabelac 2021-02-13 21:40:57 UTC
I believe this was already fixed by commit 46d15b5e4d2830fce313d7a58a1498d61b7a8f86

https://listman.redhat.com/archives/lvm-devel/2020-August/msg00015.html

It is part of the 2.03.11 build.
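(For anyone checking a system by hand, an illustrative check; per this comment the fix ships in lvm2 >= 2.03.11:)

# rpm -q lvm2
# lvm version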

Comment 9 Corey Marthaler 2021-06-14 17:39:13 UTC
No "device-mapper: remove ioctl" error found in the latest rpms. Adding QA ack for 8.5 as well as marking Verified:Tested in the latest rpms.

kernel-4.18.0-310.el8    BUILT: Thu May 27 14:24:00 CDT 2021
lvm2-2.03.12-2.el8    BUILT: Tue Jun  1 06:55:37 CDT 2021
lvm2-libs-2.03.12-2.el8    BUILT: Tue Jun  1 06:55:37 CDT 2021


[root@hayes-01 ~]# lvcreate --type raid1 -m2 -L 10G -n split_vdopool vg
  Logical volume "split_vdopool" created.
[root@hayes-01 ~]# lvs -a -o lv_name,lv_size,segtype
  LV                       LSize  Type  
  split_vdopool            10.00g raid1 
  [split_vdopool_rimage_0] 10.00g linear
  [split_vdopool_rimage_1] 10.00g linear
  [split_vdopool_rimage_2] 10.00g linear
  [split_vdopool_rmeta_0]   4.00m linear
  [split_vdopool_rmeta_1]   4.00m linear
  [split_vdopool_rmeta_2]   4.00m linear
[root@hayes-01 ~]# lvconvert --yes --type vdo-pool -n vdolv vg/split_vdopool
  WARNING: Converting logical volume vg/split_vdopool to VDO pool volume with formating.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
    Logical blocks defaulted to 1569686 blocks.
    The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdolv" created.
  Converted vg/split_vdopool to VDO pool volume and created virtual vg/vdolv VDO volume.
[root@hayes-01 ~]# lvs -a -o lv_name,lv_size,segtype
  LV                             LSize  Type    
  split_vdopool                  10.00g vdo-pool
  [split_vdopool_vdata]          10.00g raid1   
  [split_vdopool_vdata_rimage_0] 10.00g linear  
  [split_vdopool_vdata_rimage_1] 10.00g linear  
  [split_vdopool_vdata_rimage_2] 10.00g linear  
  [split_vdopool_vdata_rmeta_0]   4.00m linear  
  [split_vdopool_vdata_rmeta_1]   4.00m linear  
  [split_vdopool_vdata_rmeta_2]   4.00m linear  
  vdolv                           5.98g vdo

Comment 14 Corey Marthaler 2021-06-24 02:08:18 UTC
Verified in the latest rpms as well.


kernel-4.18.0-314.el8    BUILT: Tue Jun 15 11:04:32 CDT 2021
lvm2-2.03.12-4.el8    BUILT: Tue Jun 22 03:35:27 CDT 2021
lvm2-libs-2.03.12-4.el8    BUILT: Tue Jun 22 03:35:27 CDT 2021


[root@hayes-03 ~]# lvcreate --type raid1 -m2 -L 10G -n split_vdopool vg
  Logical volume "split_vdopool" created.
[root@hayes-03 ~]# lvconvert --yes --type vdo-pool -n vdolv vg/split_vdopool
  WARNING: Converting logical volume vg/split_vdopool to VDO pool volume with formating.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
    Logical blocks defaulted to 1569686 blocks.
    The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "vdolv" created.
  Converted vg/split_vdopool to VDO pool volume and created virtual vg/vdolv VDO volume.

Comment 17 errata-xmlrpc 2021-11-09 19:45:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4431