Bug 1803289

Summary: LVM+VDO calculation for slab size is wrong
Product: Red Hat Enterprise Linux 8
Component: lvm2 (sub component: Other)
Version: 8.2
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Reporter: Andy Walsh <awalsh>
Assignee: Zdenek Kabelac <zkabelac>
QA Contact: cluster-qe <cluster-qe>
CC: agk, awalsh, bjohnsto, heinzm, jbrassow, mcsontos, msnitzer, pasik, pberanek, prajnoha, rhandlin, zkabelac
Flags: pm-rhel: mirror+
Target Milestone: rc
Target Release: 8.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: lvm2-2.03.09-2.el8
Last Closed: 2020-11-04 02:00:20 UTC
Type: Bug

Description Andy Walsh 2020-02-15 00:55:32 UTC
Description of problem:
I noticed when creating a test LVM+VDO volume with default settings (which should use 2G slabs) that the actual slab size is always 512M.

I tried other sizes as well; for example, 512M resulted in a 128M slab size.

Version-Release number of selected component (if applicable):
kernel-4.18.0-178.el8.x86_64
lvm2-2.03.08-1.el8.x86_64
kmod-kvdo-6.2.2.117-63.el8.x86_64
vdo-6.2.2.117-13.el8.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Install lvm2, kmod-kvdo, vdo
2. Create an LVM+VDO volume specifying the metadata profile or config option mentioned in the man page.
3. Observe the output indicating the slab size: "The VDO volume can address 7 GB in 14 data slabs, each 512 MB."

Actual results:
[root@localhost ~]# lvcreate --type vdo -L 10G -V 100G --config 'allocation/vdo_slab_size_mb=2048' vg/vdo_pool 
    The VDO volume can address 7 GB in 14 data slabs, each 512 MB.
    It can grow to address at most 4 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol0" created.

Expected results:
[root@localhost ~]# lvcreate --type vdo -L 10G -V 100G --config 'allocation/vdo_slab_size_mb=2048' vg/vdo_pool 
    The VDO volume can address 14 GB in 7 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol0" created.

Additional info:
I noticed in the -vvv output that lvm treats a 2048M slab size as 17 bits.
"  Slab size 2.00 GiB converted to 17 bits."

If I try a 512M slab size setting in the config file/argument, the result is 15 bits.
"  Slab size 512.00 MiB converted to 15 bits."

Examples:
A 2G slab size should use 19 bits, instead lvm chooses 17 bits.
A 512M slab size should use 17 bits, instead lvm chooses 15 bits.
A 128M slab size should use 15 bits, instead lvm chooses 13 bits.
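
These numbers follow from VDO's 4 KiB block size: a slab of S MiB holds S * 256 data blocks, so the expected bit count is log2(S) + 8 (for S = 2048 that is 11 + 8 = 19 bits), while the values lvm chooses are consistently log2(S) + 6, i.e. a factor of 4 too few blocks.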

Comment 1 Zdenek Kabelac 2020-02-15 12:54:12 UTC
At the moment lvm2 does not have any smart code for estimating 'best fitting' values - it more or less targets 'minimal' values.

i.e. the slab bit count is computed by this code:

#define DM_VDO_BLOCK_SIZE	UINT64_C(8)	// 4KiB in sectors
slabbits = 31 - clz(vtp->slab_size_mb / DM_VDO_BLOCK_SIZE * 512)
(see:  lvm2/lib/metadata/vdo_manip.c _format_vdo_pool_data_lv())
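
For the default 2048 MiB this expression evaluates to 2048 / 8 * 512 = 131072 = 2^17 blocks, which is where the "17 bits" in the -vvv output comes from; a true 2 GiB slab holds 2^19 4 KiB blocks.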

We need to come up with an optimal design that maps a pool size to the best VDO defaults we think the user should be using.

When the user starts to control some of the settings, in what order should they be prioritized? Can we come up with an algorithm?

Also, which settings do users change most often - and do we need all of the 'lvm.conf' settings to also be available as regular 'lvcreate' options, e.g.:

lvcreate  --slab-size=10m --uds-memory-size=500M....

Comment 2 Andy Walsh 2020-02-15 15:12:07 UTC
This BZ is focused on the accuracy of the calculation as the setting is spelled today.  The default should be 2G slabs, which is reflected everywhere I looked, both in the configuration files and in the verbose lvcreate output.  The conversion from 2048m in the config to the slab-bits value is incorrect, and seems to always be off by two bits.

Here is the code in the VDO python management suite that calculates slab bits:
> https://github.com/dm-vdo/vdo/blob/master/vdo-manager/vdomgmnt/VDOService.py#L1460
>   def _computeSlabBits(self):
>     """Compute the --slab-bits parameter value for the slabSize attribute."""
>     # add some fudge because of imprecision in long arithmetic
>     blocks = self.slabSize.toBlocks()
>     return int(math.log(blocks, 2) + 0.05)
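
For the default 2 GiB slab, slabSize.toBlocks() yields 2 GiB / 4 KiB = 524288 = 2^19 blocks, so this returns the expected 19 bits.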

Comment 3 Zdenek Kabelac 2020-02-26 12:35:44 UTC
The conversion (in comment #1) wrongly took the size stored in MB and multiplied by 512 instead of by 2048, so the resulting block count was smaller by a factor of 4 (a 2-bit shift).

Fixed with this upstream patch:

https://www.redhat.com/archives/lvm-devel/2020-February/msg00043.html


The test suite has also been updated to validate that we get the expected size from vdoformat (if a new enough vdoformat is installed).
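
For illustration, a minimal standalone sketch (hypothetical code, not the actual lvm2 source; it assumes GCC's __builtin_clzll) of the wrong and the corrected conversion:

#include <stdint.h>
#include <stdio.h>

#define DM_VDO_BLOCK_SIZE	UINT64_C(8)	/* 4 KiB in 512-byte sectors */

/* log2 of the number of 4 KiB blocks per slab */
static unsigned _slab_bits(uint64_t slab_size_mb, uint64_t sectors_per_mb)
{
	return 63 - __builtin_clzll(slab_size_mb * sectors_per_mb / DM_VDO_BLOCK_SIZE);
}

int main(void)
{
	uint64_t mb[] = { 128, 512, 2048 };

	for (int i = 0; i < 3; i++)
		printf("%4llu MiB: '* 512' -> %u bits, '* 2048' -> %u bits\n",
		       (unsigned long long) mb[i],
		       _slab_bits(mb[i], 512),	/* wrong: 512 sectors is only 256 KiB */
		       _slab_bits(mb[i], 2048));	/* fixed: 1 MiB is 2048 sectors */

	return 0;
}

Run against the sizes from the description, the '* 512' column reproduces the observed 13/15/17 bits and the '* 2048' column gives the expected 15/17/19.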

Comment 7 Petr Beranek 2020-04-15 15:42:57 UTC
Adding QA ack for 8.3. Still reproducible with lvm2-2.03.08-3.el8.x86_64:


[root@virt-422 ~]# lvcreate --type vdo -L 10G -V 100G --config 'allocation/vdo_slab_size_mb=2048' myvg/vdo_pool
    The VDO volume can address 7 GB in 14 data slabs, each 512 MB.
    It can grow to address at most 4 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol0" created.

[root@virt-422 ~]# lvcreate --type vdo -L 10G -V 100G --config 'allocation/vdo_slab_size_mb=512' myvg5/vdo_pool5
    The VDO volume can address 7 GB in 58 data slabs, each 128 MB.
    It can grow to address at most 1 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol0" created.

[root@virt-422 ~]# lvcreate --type vdo -n myvdolv1 -L 10G -V 100G myvg4/vdo_pool4
    The VDO volume can address 7 GB in 14 data slabs, each 512 MB.                   # default should be 2G
    It can grow to address at most 4 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "myvdolv1" created.

Comment 11 Petr Beranek 2020-06-25 14:32:22 UTC
Verified for:


packages:
lvm2-2.03.09-2.el8.x86_64

kernel-tools-4.18.0-214.el8.x86_64
vdo-6.2.3.100-14.el8.x86_64
kernel-core-4.18.0-193.el8.x86_64
kernel-modules-4.18.0-193.el8.x86_64
lvm2-libs-2.03.09-2.el8.x86_64
kernel-core-4.18.0-214.el8.x86_64
kernel-modules-4.18.0-214.el8.x86_64
kernel-4.18.0-214.el8.x86_64
kernel-tools-libs-4.18.0-214.el8.x86_64
kernel-4.18.0-193.el8.x86_64
kmod-kvdo-6.2.3.91-73.el8.x86_64


slab sizes:
128MB
default (2G)
16386MB


tests:
lvcreate, verification of lvcreate `-vvv' output for different slab sizes

Comment 14 errata-xmlrpc 2020-11-04 02:00:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4546