Bug 1803289 - LVM+VDO calculation for slab size is wrong
Summary: LVM+VDO calculation for slab size is wrong
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-02-15 00:55 UTC by Andy Walsh
Modified: 2021-09-07 11:56 UTC
CC: 12 users

Fixed In Version: lvm2-2.03.09-2.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-11-04 02:00:20 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2020:4546 (last updated 2020-11-04 02:00:43 UTC)

Description Andy Walsh 2020-02-15 00:55:32 UTC
Description of problem:
I noticed when creating a test LVM+VDO volume with default settings (which should produce 2G slabs) that the actual slab size always comes out as 512M.

I tried other sizes as well; for example, a 512M setting resulted in a 128M slab size.

Version-Release number of selected component (if applicable):
kernel-4.18.0-178.el8.x86_64
lvm2-2.03.08-1.el8.x86_64
kmod-kvdo-6.2.2.117-63.el8.x86_64
vdo-6.2.2.117-13.el8.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Install lvm2, kmod-kvdo, vdo
2. Create an LVM+VDO volume specifying the metadata profile or config option mentioned in the man page.
3. Observe the output indicating the slab size: "The VDO volume can address 7 GB in 14 data slabs, each 512 MB."

Actual results:
[root@localhost ~]# lvcreate --type vdo -L 10G -V 100G --config 'allocation/vdo_slab_size_mb=2048' vg/vdo_pool 
    The VDO volume can address 7 GB in 14 data slabs, each 512 MB.
    It can grow to address at most 4 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol0" created.

Expected results:
[root@localhost ~]# lvcreate --type vdo -L 10G -V 100G --config 'allocation/vdo_slab_size_mb=2048' vg/vdo_pool 
    The VDO volume can address 14 GB in 7 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol0" created.

Additional info:
I noticed in the -vvv output that lvm thinks a 2048M slab size is 17 bits:
"  Slab size 2.00 GiB converted to 17 bits."

If I try a 512M slab size setting in the config file/argument, the result is 15 bits:
"  Slab size 512.00 MiB converted to 15 bits."

Examples:
A 2G slab size should use 19 bits, instead lvm chooses 17 bits.
A 512M slab size should use 17 bits, instead lvm chooses 15 bits.
A 128M slab size should use 15 bits, instead lvm chooses 13 bits.
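For a power-of-two slab size, the expected bit count is simply log2 of the slab size expressed in 4 KiB blocks. A minimal sketch of that check (hypothetical helper name, not lvm2 code):

```python
def expected_slab_bits(slab_size_mb):
    """log2 of the slab size in 4 KiB blocks (slab sizes are powers of two)."""
    blocks = slab_size_mb * 1024 // 4   # MiB -> number of 4 KiB blocks
    return blocks.bit_length() - 1      # log2 for a power of two

# 2048 MiB -> 19, 512 MiB -> 17, 128 MiB -> 15, matching the table above
```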

Comment 1 Zdenek Kabelac 2020-02-15 12:54:12 UTC
At the moment lvm2 does not yet have any smart code for estimating 'best fitting' values - it more or less targets 'minimal' values.

i.e.  slab bit size is made by this code:

#define DM_VDO_BLOCK_SIZE	UINT64_C(8)	// 4KiB in sectors
slabbits = 31 - clz(vtp->slab_size_mb / DM_VDO_BLOCK_SIZE * 512)
(see:  lvm2/lib/metadata/vdo_manip.c _format_vdo_pool_data_lv())

We need to come up with some optimal design that produces usable defaults:

pool size -> the best VDO defaults we think the user should be using.


And when the user starts to control some of the settings - what is the order in which they should be prioritized -
can we come up with an algorithm?

Also, which settings do users change most often - or do we need all of the 'lvm.conf' settings
also present as regular 'lvcreate' options?

aka  lvcreate  --slab-size=10m --uds-memory-size=500M....

Comment 2 Andy Walsh 2020-02-15 15:12:07 UTC
This BZ is more narrowly focused on the accuracy of the setting as it is spelled today. The default should be 2G slabs, which seems to be reflected everywhere I looked, in both the configuration files and the verbose output of lvcreate. The calculation from 2048m in the config to the value used for slab-bits is incorrect, and seems to always be two bits off.

Here is the VDO code in the Python management suite that calculates slab bits:
> https://github.com/dm-vdo/vdo/blob/master/vdo-manager/vdomgmnt/VDOService.py#L1460
>   def _computeSlabBits(self):
>     """Compute the --slab-bits parameter value for the slabSize attribute."""
>     # add some fudge because of imprecision in long arithmetic
>     blocks = self.slabSize.toBlocks()
>     return int(math.log(blocks, 2) + 0.05)
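The reference logic above can be exercised standalone; in this sketch, slabSize.toBlocks() is replaced with a plain byte count divided by the 4 KiB block size (an assumption for illustration):

```python
import math

def compute_slab_bits(slab_size_bytes):
    # Same idea as VDOService._computeSlabBits: log2 of the slab size in
    # 4 KiB blocks, plus a small fudge term for floating-point imprecision.
    blocks = slab_size_bytes // 4096
    return int(math.log(blocks, 2) + 0.05)

# e.g. a 2 GiB slab is 524288 blocks -> 19 bits
```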

Comment 3 Zdenek Kabelac 2020-02-26 12:35:44 UTC
The conversion (in comment #1) wrongly took the size stored in MB and multiplied by 512 instead of 2048, so the resulting block count was smaller by a factor of 4 (a 2-bit shift).
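The factor-of-4 error can be sketched as follows (Python stand-ins for the C code, with hypothetical names; `bit_length() - 1` plays the role of `31 - clz()` for powers of two):

```python
DM_VDO_BLOCK_SIZE = 8  # 4 KiB expressed in 512-byte sectors

def buggy_slab_bits(slab_size_mb):
    # Bug: multiplies by 512 where 2048 (sectors per MiB) was intended,
    # so the block count is 4x too small and the result is 2 bits low.
    blocks = slab_size_mb // DM_VDO_BLOCK_SIZE * 512
    return blocks.bit_length() - 1

def fixed_slab_bits(slab_size_mb):
    # Fixed: 1 MiB = 2048 sectors; 8 sectors per 4 KiB block.
    blocks = slab_size_mb * 2048 // DM_VDO_BLOCK_SIZE
    return blocks.bit_length() - 1

# buggy_slab_bits(2048) -> 17 (as observed), fixed_slab_bits(2048) -> 19
```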

Fixed with this upstream patch:

https://www.redhat.com/archives/lvm-devel/2020-February/msg00043.html


The test suite has also been updated to validate that we get the expected size from vdoformat (if a new enough vdoformat is installed).

Comment 7 Petr Beranek 2020-04-15 15:42:57 UTC
Adding QA ack for 8.3. Still reproducible with lvm2-2.03.08-3.el8.x86_64:


[root@virt-422 ~]# lvcreate --type vdo -L 10G -V 100G --config 'allocation/vdo_slab_size_mb=2048' myvg/vdo_pool
    The VDO volume can address 7 GB in 14 data slabs, each 512 MB.
    It can grow to address at most 4 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol0" created.

[root@virt-422 ~]# lvcreate --type vdo -L 10G -V 100G --config 'allocation/vdo_slab_size_mb=512' myvg5/vdo_pool5
    The VDO volume can address 7 GB in 58 data slabs, each 128 MB.
    It can grow to address at most 1 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol0" created.

[root@virt-422 ~]# lvcreate --type vdo -n myvdolv1 -L 10G -V 100G myvg4/vdo_pool4
    The VDO volume can address 7 GB in 14 data slabs, each 512 MB.                   # default should be 2G
    It can grow to address at most 4 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "myvdolv1" created.

Comment 11 Petr Beranek 2020-06-25 14:32:22 UTC
Verified for


packages:
lvm2-2.03.09-2.el8.x86_64

kernel-tools-4.18.0-214.el8.x86_64
vdo-6.2.3.100-14.el8.x86_64
kernel-core-4.18.0-193.el8.x86_64
kernel-modules-4.18.0-193.el8.x86_64
lvm2-libs-2.03.09-2.el8.x86_64
kernel-core-4.18.0-214.el8.x86_64
kernel-modules-4.18.0-214.el8.x86_64
kernel-4.18.0-214.el8.x86_64
kernel-tools-libs-4.18.0-214.el8.x86_64
kernel-4.18.0-193.el8.x86_64
kmod-kvdo-6.2.3.91-73.el8.x86_64


slab sizes:
128MB
default (2G)
16386MB


tests:
lvcreate, verification of lvcreate `-vvv' output for different slab sizes

Comment 14 errata-xmlrpc 2020-11-04 02:00:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4546

