Bug 1857147
| Summary: | lvm vdo memory consumption is way higher than documented | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Roman Bednář <rbednar> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Command-line tools | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | unspecified | | |
| Priority: | medium | CC: | agk, awalsh, bgurney, cmarthal, heinzm, jbrassow, mcsontos, msnitzer, pasik, prajnoha, thornber, zkabelac |
| Version: | 8.3 | Flags: | pm-rhel: mirror+ |
| Target Milestone: | rc | | |
| Target Release: | 8.0 | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.03.12-1.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-11-09 19:45:20 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Roman Bednář 2020-07-15 09:06:59 UTC
Using standalone vdo shows the same result with the same parameters: 200 TB physical / 4 PB logical / 32 GB slab gives about 60 GB of RAM consumption. So we can rule out lvm overhead.
```
# lvs vg -o lv_name,lv_size
  LV     LSize
  linear <200.09t

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi       1.9Gi       374Gi        10Mi       369Mi       372Gi
Swap:         4.0Gi          0B       4.0Gi

# vdo create -n vdovol --device /dev/vg/linear --vdoSlabSize 32768 --vdoLogicalSize 4294967296
Creating VDO vdovol
      The VDO volume can address 200 TB in 6402 data slabs, each 32 GB.
      It can grow to address at most 256 TB of physical storage in 8192 slabs.
Starting VDO vdovol
Starting compression on VDO vdovol
VDO instance 3 volume is ready at /dev/mapper/vdovol

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi        61Gi       314Gi        10Mi       371Mi       313Gi
Swap:         4.0Gi          0B       4.0Gi
```
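The slab accounting and the memory jump roughly line up with the figures later documented in lvmvdo(7) (370 MiB base plus 268 MiB per 1 TiB of physical storage, plus the UDS index, quoted in the verification comment below). A minimal Python sketch of that arithmetic, using the numbers from the output above:

```python
# Slab accounting printed by "vdo create":
slab_size_gb = 32
data_slabs = 6402            # reported for the ~200 TB device
max_slabs = 8192             # per-volume slab limit

addressable_tb = data_slabs * slab_size_gb / 1024    # ~200 TB
max_physical_tb = max_slabs * slab_size_gb / 1024    # 256 TB

# Memory estimate from the lvmvdo(7) figures:
# 370 MiB base + 268 MiB per TiB of physical storage + default UDS index.
base_mib = 370
per_tib_mib = 268
physical_tib = 200
uds_default_mib = 250

est_gib = (base_mib + per_tib_mib * physical_tib + uds_default_mib) / 1024
print(f"estimated VDO memory: {est_gib:.1f} GiB")   # ~53 GiB
```

That estimate is in the same ballpark as the ~60 GiB observed in `free -h`; the remainder goes to the block map cache and other per-device structures.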
Is lvmvdo configuring an additional amount of block map cache? Remember that the block map cache uses memory to cache part of the block map, to reduce the number of block map reads. The default block map cache size is 128 MiB.

Example with the default block map cache configured (128 MiB):
```
# free -m; vdo create --name=vdo1 --device=/dev/nvme0n1p3 --vdoLogicalSize=1024T; free -m
              total        used        free      shared  buff/cache   available
Mem:         128638       25947        3203          73       99487      101393
Swap:          4095          10        4085
Creating VDO vdo1
      The VDO volume can address 1020 GB in 510 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO vdo1
Starting compression on VDO vdo1
VDO instance 3 volume is ready at /dev/mapper/vdo1
              total        used        free      shared  buff/cache   available
Mem:         128638       28069        1081          73       99487       99260
Swap:          4095          10        4085
```
Example with a user-specified block map cache configured (16 GiB):
```
# free -m; vdo create --name=vdo1 --device=/dev/nvme0n1p3 --vdoLogicalSize=1024T --blockMapCacheSize=16384; free -m
              total        used        free      shared  buff/cache   available
Mem:         128638       25949        3201          73       99487      101392
Swap:          4095          10        4085
Creating VDO vdo1
      The VDO volume can address 1020 GB in 510 data slabs, each 2 GB.
      It can grow to address at most 16 TB of physical storage in 8192 slabs.
      If a larger maximum size might be needed, use bigger slabs.
Starting VDO vdo1
Starting compression on VDO vdo1
VDO instance 4 volume is ready at /dev/mapper/vdo1
              total        used        free      shared  buff/cache   available
Mem:         128638       46894         809          73       80934       80435
Swap:          4095          10        4085
```
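Subtracting the `used` columns from the two runs shows how closely the extra consumption tracks the block map cache setting. A quick Python sketch using the `free -m` values above:

```python
# "used" values (MiB) taken from the free -m output around each vdo create:
used_before_default, used_after_default = 25947, 28069   # 128 MiB cache run
used_before_16g, used_after_16g = 25949, 46894           # 16384 MiB cache run

delta_default_mib = used_after_default - used_before_default   # ~2.1 GiB
delta_16g_mib = used_after_16g - used_before_16g               # ~20.5 GiB

# The extra memory in the second run is close to the block map cache growth
# (16384 MiB requested vs the 128 MiB default):
cache_growth_mib = 16384 - 128
extra_mib = delta_16g_mib - delta_default_mib
print(f"extra memory with 16 GiB cache: {extra_mib / 1024:.1f} GiB")  # ~18.4 GiB
```

The ~18 GiB extra slightly exceeds the 16 GiB cache itself, the rest being per-cache bookkeeping overhead.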
No, lvmvdo is using the default from lvm.conf (128 MiB). Increasing it results in higher memory consumption as you showed, although at scale the difference is more significant (up to 18 GB more memory).

Equivalent example in lvmvdo:
```
# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi       1.9Gi       373Gi        10Mi       413Mi       372Gi
Swap:         4.0Gi          0B       4.0Gi
```
===Default block map cache 128MB===
```
# grep vdo_block_map_cache_size_mb /etc/lvm/lvm.conf
        # Configuration option allocation/vdo_block_map_cache_size_mb.
        # vdo_block_map_cache_size_mb = 128

# lvcreate --type vdo -L200T -V4095T vg
WARNING: vdo signature detected on /dev/vg/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/vg/vpool0.
    The VDO volume can address 199 TB in 6399 data slabs, each 32 GB.
    It can grow to address at most 256 TB of physical storage in 8192 slabs.
  Logical volume "lvol0" created.

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi        61Gi       314Gi        10Mi       421Mi       313Gi
Swap:         4.0Gi          0B       4.0Gi
```
===Increased block map cache to 16384 MB - memory consumption went up by ~18 GB===
```
# grep vdo_block_map_cache_size_mb /etc/lvm/lvm.conf
        # Configuration option allocation/vdo_block_map_cache_size_mb.
        # vdo_block_map_cache_size_mb = 128
        vdo_block_map_cache_size_mb = 16384

# lvcreate --type vdo -L200T -V4095T vg
WARNING: vdo signature detected on /dev/vg/vpool0 at offset 0. Wipe it? [y/n]: y
  Wiping vdo signature on /dev/vg/vpool0.
    The VDO volume can address 199 TB in 6399 data slabs, each 32 GB.
    It can grow to address at most 256 TB of physical storage in 8192 slabs.
  Logical volume "lvol0" created.

# free -h
              total        used        free      shared  buff/cache   available
Mem:          376Gi        79Gi       296Gi        10Mi       420Mi       295Gi
Swap:         4.0Gi          0B       4.0Gi
```
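As an aside, lvm commands accept a `--config` override, so the larger cache could also be requested for a single invocation without editing /etc/lvm/lvm.conf. A sketch, reusing the volume group `vg` from the examples above (exact sizing flags as in the run above):

```shell
# One-off override of the block map cache size (16 GiB) for this create only,
# instead of persisting vdo_block_map_cache_size_mb in /etc/lvm/lvm.conf:
lvcreate --type vdo -L200T -V4095T \
    --config 'allocation/vdo_block_map_cache_size_mb=16384' vg
```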
With this upstream commit: https://www.redhat.com/archives/lvm-devel/2020-November/msg00000.html the lvmvdo man page has been enhanced with a description of the memory usage of the VDO kernel dm target module.

Marking Verified:Tested in the latest rpms.

```
kernel-4.18.0-310.el8    BUILT: Thu May 27 14:24:00 CDT 2021
lvm2-2.03.12-2.el8       BUILT: Tue Jun  1 06:55:37 CDT 2021
lvm2-libs-2.03.12-2.el8  BUILT: Tue Jun  1 06:55:37 CDT 2021
```

Section listed in comment #5 is present:

> 6. Memory usage
>
> The VDO target requires 370 MiB of RAM plus an additional 268 MiB per each 1 TiB of physical storage managed by the volume.
>
> UDS requires a minimum of 250 MiB of RAM, which is also the default amount that deduplication uses. The memory required for the UDS index is determined by the index type and the required size of the deduplication window and is controlled by the allocation/vdo_use_sparse_index setting.
>
> With enabled UDS sparse indexing, it relies on the temporal locality of data and attempts to retain only the most relevant index entries in memory and can maintain a deduplication window that is ten times larger than with dense while using the same amount of memory.
>
> Although the sparse index provides the greatest coverage, the dense index provides more deduplication advice. For most workloads, given the same amount of memory, the difference in deduplication rates between dense and sparse indexes is negligible.
>
> A dense index with 1 GiB of RAM maintains a 1 TiB deduplication window, while a sparse index with 1 GiB of RAM maintains a 10 TiB deduplication window. In general, 1 GiB is sufficient for 4 TiB of physical space with a dense index and 40 TiB with a sparse index.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4431