Bug 1683952
| Summary: | Native vdo pool autoextend not working | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Roman Bednář <rbednar> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Other | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | unspecified | | |
| Priority: | high | CC: | agk, awalsh, heinzm, jbrassow, mcsontos, msnitzer, pasik, pberanek, prajnoha, zkabelac |
| Version: | 8.0 | Flags: | pm-rhel: mirror+ |
| Target Milestone: | rc | | |
| Target Release: | 8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.03.11-0.2.20201103git8801a86.el8 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-05-18 15:01:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Roman Bednář
2019-02-28 08:00:08 UTC
Extension support for VDOPools came with version 2.03.02, but it turned out there were some corner cases that needed further enhancement, especially with small pool sizes. That version required the user to handle the size-increment step themselves: an extension has to be at least the size of one VDO slab, and if the requested extension is smaller, the resize is essentially rejected by the kernel target.

The issue has been addressed by this upstream commit (part of the 2.03.11 release): https://www.redhat.com/archives/lvm-devel/2020-September/msg00138.html

With this commit the resize experience is much better for the user, as lvm2 now takes care of figuring out the minimal size increment required to make the VDOPool bigger.

It is also worth mentioning that the automatic extension of the VDOPool is limited by the same problem. It affects users with smaller VDOPools, where the configured policy percentage of the current VDOPool size comes out smaller than the VDO slab size. The problem can be avoided by increasing the policy percentage, or by using a bigger VDOPool for which the percentage results in a larger growth step.

qa ack?

Verified for the latest .n tree (RHEL-8.4.0-20210107.n.0):
# rpm -qa | egrep "lvm2|vdo|kernel" | sort
kernel-4.18.0-269.el8.x86_64
kernel-core-4.18.0-269.el8.x86_64
kernel-devel-4.18.0-269.el8.x86_64
kernel-headers-4.18.0-269.el8.x86_64
kernel-modules-4.18.0-269.el8.x86_64
kernel-modules-extra-4.18.0-269.el8.x86_64
kernel-tools-4.18.0-269.el8.x86_64
kernel-tools-libs-4.18.0-269.el8.x86_64
kmod-kvdo-6.2.4.26-76.el8.x86_64
lvm2-2.03.11-0.4.20201222gitb84a992.el8.x86_64
lvm2-libs-2.03.11-0.4.20201222gitb84a992.el8.x86_64
lvm2-lockd-2.03.11-0.4.20201222gitb84a992.el8.x86_64
vdo-6.2.4.14-14.el8.x86_64
# lvmconfig | grep vdo_pool_autoextend
vdo_pool_autoextend_threshold=70 # vdo_pool_autoextend_percent is default (20%)
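For reference, the autoextend behaviour being verified here is driven by the two settings above: once the pool's data usage crosses vdo_pool_autoextend_threshold, the pool is grown by vdo_pool_autoextend_percent of its current size. A minimal lvm.conf sketch for this test setup follows; the placement under the activation section is assumed by analogy with the thin pool autoextend settings, and the percent line merely restates the default shown above:

activation {
        # Autoextend the VDO pool once its data usage crosses this percentage.
        vdo_pool_autoextend_threshold = 70
        # Grow the pool by this percentage of its current size on each extension (default).
        vdo_pool_autoextend_percent = 20
}

As with thin pool autoextension, the extension itself should be performed by dmeventd while the pool LV is monitored.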
# lvcreate --vdo --size 10G --name lvol0 vg1
Logical blocks defaulted to 1569686 blocks.
The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
It can grow to address at most 16 TB of physical storage in 8192 slabs.
If a larger maximum size might be needed, use bigger slabs.
Logical volume "lvol0" created.
# mkfs.ext4 -E nodiscard /dev/vg1/lvol0
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 1568768 4k blocks and 392448 inodes
Filesystem UUID: 2f3d6a68-cb39-4617-9ef4-88c3327cd4a0
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
# mkdir /mnt/lvol0
# mount /dev/vg1/lvol0 /mnt/lvol0
# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root rhel_virt-047 -wi-ao---- <6.20g
swap rhel_virt-047 -wi-ao---- 820.00m
lvol0 vg1 vwi-aov--- 5.98g vpool0 0.01
vpool0 vg1 dwi------- 10.00g 40.05
[vpool0_vdata] vg1 Dwi-ao---- 10.00g
# head -c 4G < /dev/urandom > /mnt/lvol0/random_data.bin
# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root rhel_virt-047 -wi-ao---- <6.20g
swap rhel_virt-047 -wi-ao---- 820.00m
lvol0 vg1 vwi-aov--- 5.98g vpool0 64.24
vpool0 vg1 dwi------- 12.00g 65.45
[vpool0_vdata] vg1 Dwi-ao---- 12.00g
# head -c 1G < /dev/urandom > /mnt/lvol0/random_data.bin
# lvs -a
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root rhel_virt-047 -wi-ao---- <6.20g
swap rhel_virt-047 -wi-ao---- 820.00m
lvol0 vg1 vwi-aov--- 5.98g vpool0 83.56
vpool0 vg1 dwi------- 14.40g 65.38
[vpool0_vdata] vg1 Dwi-ao---- 14.40g
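The pool sizes above are consistent with the configured policy. Data from /dev/urandom is effectively incompressible and non-deduplicatable, so each write pushes the pool's physical usage past the 70% threshold and the pool is grown by the default 20% of its current size:

10.00g x 1.20 = 12.00g (first autoextend)
12.00g x 1.20 = 14.40g (second autoextend)

Both increments (2 GB and 2.4 GB) are at least one 2 GB slab; with an even smaller pool or a lower percent, the fixed lvm2 would round the increment up to a full slab instead of having the resize rejected.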
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1659