Bug 1857147 - lvm vdo memory consumption is way higher than documented
Summary: lvm vdo memory consumption is way higher than documented
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.3
Hardware: x86_64
OS: Linux
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-15 09:06 UTC by Roman Bednář
Modified: 2021-11-10 08:51 UTC
CC List: 12 users

Fixed In Version: lvm2-2.03.12-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-11-09 19:45:20 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2021:4431 0 None None None 2021-11-09 19:45:39 UTC

Description Roman Bednář 2020-07-15 09:06:59 UTC
Looking at the system requirements in the docs (Table 1.2), VDO memory consumption should not exceed roughly 12GB, yet in reality it consumes up to 60GB when a VDO volume is configured with maximum limits.

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/deduplicating_and_compressing_storage/deploying-vdo_deduplicating-and-compressing-storage#vdo-requirements_deploying-vdo


Initial available memory 374G:

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi       1.7Gi       374Gi        10Mi       421Mi       372Gi
Swap:         4.0Gi          0B       4.0Gi

Increased slab size to maximum:

# grep slab_size_mb /etc/lvm/lvm.conf
# Configuration option allocation/vdo_slab_size_mb.
# vdo_slab_size_mb = 2048
vdo_slab_size_mb = 32768
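
Side note: for a one-off experiment the same allocation setting can also be overridden per command with --config instead of editing lvm.conf. A minimal sketch reusing the setting name shown above; "vg" is the volume group used throughout this report:

# lvcreate --config 'allocation/vdo_slab_size_mb=32768' --type vdo -L200TB vg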

Memory consumption examples:

===200TB physical vdo => 54GB memory===

# lvcreate --type vdo -L200TB vg
Logical blocks defaulted to 53597782785 blocks.
The VDO volume can address 199 TB in 6399 data slabs, each 32 GB.
It can grow to address at most 256 TB of physical storage in 8192 slabs.
Logical volume "lvol0" created.

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi        55Gi       320Gi        10Mi       427Mi       319Gi
Swap:         4.0Gi          0B       4.0Gi

===100TB physical vdo => 27GB memory===

# lvcreate --type vdo -L100TB vg
WARNING: vdo signature detected on /dev/vg/vpool0 at offset 0. Wipe it? [y/n]: y
Wiping vdo signature on /dev/vg/vpool0.
Logical blocks defaulted to 26794703345 blocks.
The VDO volume can address 99 TB in 3199 data slabs, each 32 GB.
It can grow to address at most 256 TB of physical storage in 8192 slabs.
Logical volume "lvol0" created.

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi        28Gi       347Gi        10Mi       426Mi       345Gi
Swap:         4.0Gi          0B       4.0Gi

===200TB physical vdo + 4PB logical => 60GB memory===

# lvcreate --type vdo -L200TB -V4095tb vg
The VDO volume can address 199 TB in 6399 data slabs, each 32 GB.
It can grow to address at most 256 TB of physical storage in 8192 slabs.
Logical volume "lvol0" created.

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi        61Gi       314Gi        10Mi       435Mi       313Gi
Swap:         4.0Gi          0B       4.0Gi


With the default slab size and 'only' 10TB of physical storage, memory consumption is closer to normal but still higher than documented:

===10TB physical vdo => 4GB memory===

# lvcreate --type vdo -L10TB vg
WARNING: vdo signature detected on /dev/vg/vpool0 at offset 0. Wipe it? [y/n]: y
Wiping vdo signature on /dev/vg/vpool0.
Logical blocks defaulted to 2678187684 blocks.
The VDO volume can address 9 TB in 5118 data slabs, each 2 GB.
It can grow to address at most 16 TB of physical storage in 8192 slabs.
If a larger maximum size might be needed, use bigger slabs.
Logical volume "lvol0" created.

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi       4.9Gi       370Gi        10Mi       428Mi       369Gi
Swap:         4.0Gi          0B       4.0Gi

# grep slab_size_mb /etc/lvm/lvm.conf
# Configuration option allocation/vdo_slab_size_mb.
# vdo_slab_size_mb = 2048



kernel-4.18.0-224.el8.x86_64
kmod-kvdo-6.2.3.107-73.el8.x86_64
lvm2-2.03.09-3.el8.x86_64
lvm2-libs-2.03.09-3.el8.x86_64
vdo-6.2.3.107-14.el8.x86_64

Comment 1 Roman Bednář 2020-07-15 10:47:50 UTC
Using vdo alone (without lvm) shows the same results with the same parameters - 200TB physical / 4PB logical / 32GB slabs => ~60GB RAM consumed.

So we can rule out lvm overhead.

# lvs vg -o lv_name,lv_size
  LV     LSize
  linear <200.09t

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi       1.9Gi       374Gi        10Mi       369Mi       372Gi
Swap:         4.0Gi          0B       4.0Gi


# vdo create -n vdovol --device /dev/vg/linear --vdoSlabSize 32768 --vdoLogicalSize 4294967296
Creating VDO vdovol
      The VDO volume can address 200 TB in 6402 data slabs, each 32 GB.
      It can grow to address at most 256 TB of physical storage in 8192 slabs.
Starting VDO vdovol
Starting compression on VDO vdovol
VDO instance 3 volume is ready at /dev/mapper/vdovol

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi        61Gi       314Gi        10Mi       371Mi       313Gi
Swap:         4.0Gi          0B       4.0Gi

Comment 2 Bryan Gurney 2020-07-15 14:20:03 UTC
Is lvmvdo configuring an additional amount of block map cache?  Remember that the block map cache uses memory to hold a portion of the block map, reducing the number of block map reads.  The default block map cache size is 128 MiB.

Example with the default block map cache configured (128 MiB):

# free -m; vdo create --name=vdo1 --device=/dev/nvme0n1p3 --vdoLogicalSize=1024T; free -m
              total        used        free      shared  buff/cache   available
Mem:         128638       25947        3203          73       99487      101393
Swap:          4095          10        4085
Creating VDO vdo1
      The VDO volume can address 1020 GB in 510 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO vdo1
Starting compression on VDO vdo1
VDO instance 3 volume is ready at /dev/mapper/vdo1
              total        used        free      shared  buff/cache   available
Mem:         128638       28069        1081          73       99487       99260
Swap:          4095          10        4085


Example with a user-specified block map cache configured (16 GiB):

# free -m; vdo create --name=vdo1 --device=/dev/nvme0n1p3 --vdoLogicalSize=1024T --blockMapCacheSize=16384; free -m
              total        used        free      shared  buff/cache   available
Mem:         128638       25949        3201          73       99487      101392
Swap:          4095          10        4085
Creating VDO vdo1
      The VDO volume can address 1020 GB in 510 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO vdo1
Starting compression on VDO vdo1
VDO instance 4 volume is ready at /dev/mapper/vdo1
              total        used        free      shared  buff/cache   available
Mem:         128638       46894         809          73       80934       80435
Swap:          4095          10        4085
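
For completeness: with vdo-manager volumes like the one above, the block map cache size actually in effect can be read back from vdo status (a sketch, assuming the volume name from the examples above; the exact field label may differ slightly between versions):

# vdo status --name=vdo1 | grep -i 'block map cache'
    Block map cache size: 128M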

Comment 3 Roman Bednář 2020-07-16 07:23:37 UTC
Nope, lvmvdo is using the defaults from lvm.conf (128MB). Increasing it results in higher memory consumption as you showed, although at scale the difference is more significant (up to 18GB more memory).
 
Equivalent example in lvmvdo:

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi       1.9Gi       373Gi        10Mi       413Mi       372Gi
Swap:         4.0Gi          0B       4.0Gi

===Default block map cache 128MB===

# grep vdo_block_map_cache_size_mb /etc/lvm/lvm.conf
	# Configuration option allocation/vdo_block_map_cache_size_mb.
	# vdo_block_map_cache_size_mb = 128

# lvcreate --type vdo -L200T -V4095T vg
WARNING: vdo signature detected on /dev/vg/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/vg/vpool0.
    The VDO volume can address 199 TB in 6399 data slabs, each 32 GB.
    It can grow to address at most 256 TB of physical storage in 8192 slabs.
  Logical volume "lvol0" created.

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi        61Gi       314Gi        10Mi       421Mi       313Gi
Swap:         4.0Gi          0B       4.0Gi


===Increased block map cache to 16384MB - memory consumption went up by ~18GB===

# grep vdo_block_map_cache_size_mb /etc/lvm/lvm.conf
	# Configuration option allocation/vdo_block_map_cache_size_mb.
	# vdo_block_map_cache_size_mb = 128
	vdo_block_map_cache_size_mb = 16384

# lvcreate --type vdo -L200T -V4095T vg
WARNING: vdo signature detected on /dev/vg/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/vg/vpool0.
    The VDO volume can address 199 TB in 6399 data slabs, each 32 GB.
    It can grow to address at most 256 TB of physical storage in 8192 slabs.
  Logical volume "lvol0" created.

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi        79Gi       296Gi        10Mi       420Mi       295Gi
Swap:         4.0Gi          0B       4.0Gi

Comment 5 Zdenek Kabelac 2020-11-18 12:03:03 UTC
With this upstream commit:

https://www.redhat.com/archives/lvm-devel/2020-November/msg00000.html

the lvmvdo man page has been enhanced with a description of the memory usage of the VDO kernel dm target module.

Comment 7 Corey Marthaler 2021-06-01 21:49:01 UTC
Marking Verified:Tested in the latest rpms.

kernel-4.18.0-310.el8    BUILT: Thu May 27 14:24:00 CDT 2021
lvm2-2.03.12-2.el8    BUILT: Tue Jun  1 06:55:37 CDT 2021
lvm2-libs-2.03.12-2.el8    BUILT: Tue Jun  1 06:55:37 CDT 2021


The section mentioned in comment #5 is present:

   6. Memory usage
       The VDO target requires 370 MiB of RAM plus an additional 268 MiB per each 1 TiB of physical storage managed by the volume.

       UDS requires a minimum of 250 MiB of RAM, which is also the default amount that deduplication uses.

       The memory required for the UDS index is determined by the index type and the required size of the deduplication window and is controlled by the allocation/vdo_use_sparse_index setting.

       With  enabled  UDS sparse indexing, it relies on the temporal locality of data and attempts to retain only the most relevant index entries in memory and can maintain a deduplication window
       that is ten times larger than with dense while using the same amount of memory.

       Although the sparse index provides the greatest coverage, the dense index provides more deduplication advice.  For most workloads, given the same amount of memory, the difference in
       deduplication rates between dense and sparse indexes is negligible.

       A dense index with 1 GiB of RAM maintains a 1 TiB deduplication window, while a sparse index with 1 GiB of RAM maintains a 10 TiB deduplication window.  In general, 1 GiB is sufficient for
       4 TiB of physical space with a dense index and 40 TiB with a sparse index.
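
For reference, this formula lines up with the numbers from comment 0. A rough back-of-the-envelope check for the 200 TB volume, adding the default UDS index (250 MiB) and block map cache (128 MiB) quoted earlier in this bug; the per-component accounting here is an approximation, not an exact breakdown:

# echo $(( 370 + 200*268 + 250 + 128 )) MiB
54348 MiB

That is ~53 GiB, versus the ~53-55 GiB increase observed with free -h. The 10 TB default-slab case matches similarly: 370 + 10*268 + 250 + 128 = 3428 MiB, i.e. ~3.3 GiB against the ~3.2 GiB observed.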

Comment 12 errata-xmlrpc 2021-11-09 19:45:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4431

