Bug 1683952 - Native vdo pool autoextend not working
Summary: Native vdo pool autoextend not working
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: lvm2
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: 8.0
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-28 08:00 UTC by Roman Bednář
Modified: 2021-09-07 11:50 UTC
CC List: 10 users

Fixed In Version: lvm2-2.03.11-0.2.20201103git8801a86.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-18 15:01:41 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+



Description Roman Bednář 2019-02-28 08:00:08 UTC
This is a bug for the 8.0 Technology Preview (BZ 1638522); bump to the next release if needed.

With vdo_pool_autoextend_threshold set to 80, the native VDO pool is not autoextended when usage crosses the threshold; only a warning is logged.


# lvmconfig | grep vdo_pool_autoextend
    vdo_pool_autoextend_threshold=80

# lvs -a
  LV             VG            Attr       LSize    Pool   Origin Data%  
  root           rhel_virt-122 -wi-ao----   <6.20g
  swap           rhel_virt-122 -wi-ao----  820.00m
  lvol0          vg            vwi-a-v--- 1016.00m vpool0        0.00
  vpool0         vg            dwi-ao----    4.00g               75.05   <<<<
  [vpool0_vdata] vg            Dwi-ao----    4.00g

# pvs
  PV         VG            Fmt  Attr PSize   PFree
  /dev/sda   vg            lvm2 a--  <10.00g <6.00g
  /dev/vda2  rhel_virt-122 lvm2 a--   <7.00g     0

# vgs
  VG            #PV #LV #SN Attr   VSize   VFree
  rhel_virt-122   1   2   0 wz--n-  <7.00g     0
  vg              1   2   0 wz--n- <10.00g <6.00g

#### write data to lvol0 to make the pool go beyond 80%
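The exact write command isn't shown in the transcript; as a back-of-the-envelope sketch (using the 4 GiB pool size and 75.05% usage from the lvs output above), the amount of additional physical data needed to cross the 80% threshold is roughly:

```shell
# How many MiB of physical data push a 4 GiB pool from 75.05% to 80%?
# Integer arithmetic; percentages scaled by 100 to avoid floats.
pool_mib=$(( 4 * 1024 ))
used_x100=7505          # 75.05% scaled by 100
threshold_pct=80
needed_mib=$(( pool_mib * threshold_pct / 100 - pool_mib * used_x100 / 10000 ))
echo "$needed_mib"      # prints 202 (MiB)
```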

#### warning message in logs
Feb 26 14:13:04 virt-122 lvm[2468]: WARNING: VDO pool vg-vpool0 is now 80.05% full.

#### extension did not happen
# lvs -a
  LV             VG            Attr       LSize    Pool   Origin Data%  
  root           rhel_virt-122 -wi-ao----   <6.20g
  swap           rhel_virt-122 -wi-ao----  820.00m
  lvol0          vg            vwi-aov--- 1016.00m vpool0        20.07
  vpool0         vg            dwi-ao----    4.00g               80.05   <<<<
  [vpool0_vdata] vg            Dwi-ao----    4.00g



Tested version: lvm2-2.03.02-6.el8.x86_64

Comment 1 Zdenek Kabelac 2020-09-23 09:41:28 UTC
Extension support for VDO pools was added in version 2.03.02.
However, it turned out there were some corner cases that needed further enhancements (especially with small pool sizes).

The existing version required the user to take care of the 'size increment jump' themselves: an extension has to be at least the size of one VDO slab.
If the size of the extension is smaller, the resize is rejected by the kernel target.

The issue has been addressed by this recent upstream commit (part of the 2.03.11 release):
https://www.redhat.com/archives/lvm-devel/2020-September/msg00138.html

With this commit the resize experience is much better, as lvm2 now takes care of figuring out the minimal size increment required to grow the VDO pool.

It is also worth mentioning that 'automatic' extension of the VDO pool is limited by the same problem. This affects users with smaller pools, where the configured policy percentage of the current pool size is smaller than the VDO slab size. The problem can be avoided by increasing the policy amount, or by using a bigger pool for which the percentage results in a larger growth step.
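As a rough sketch of the constraint described above (the 2 GiB slab size matches the "each 2 GB" slabs reported by lvcreate in comment 12, and 20% is the default vdo_pool_autoextend_percent), the smallest pool for which one autoextend step covers a full slab can be computed as:

```shell
# Minimal pool size (GiB) such that one autoextend step of
# autoextend_percent covers at least one VDO slab.
slab_gib=2              # assumed default slab size (2 GiB)
autoextend_percent=20   # default vdo_pool_autoextend_percent
min_pool_gib=$(( slab_gib * 100 / autoextend_percent ))
echo "$min_pool_gib"    # prints 10; smaller pools hit the rejected-resize case
```

This is consistent with the verification in comment 12, which uses a 10 GiB pool and sees the autoextend succeed.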

Comment 3 Jonathan Earl Brassow 2020-11-13 23:16:23 UTC
qa ack?

Comment 12 Petr Beranek 2021-01-13 14:29:02 UTC
Verified for the latest .n tree (RHEL-8.4.0-20210107.n.0):

# rpm -qa | egrep "lvm2|vdo|kernel" | sort
kernel-4.18.0-269.el8.x86_64
kernel-core-4.18.0-269.el8.x86_64
kernel-devel-4.18.0-269.el8.x86_64
kernel-headers-4.18.0-269.el8.x86_64
kernel-modules-4.18.0-269.el8.x86_64
kernel-modules-extra-4.18.0-269.el8.x86_64
kernel-tools-4.18.0-269.el8.x86_64
kernel-tools-libs-4.18.0-269.el8.x86_64
kmod-kvdo-6.2.4.26-76.el8.x86_64
lvm2-2.03.11-0.4.20201222gitb84a992.el8.x86_64
lvm2-libs-2.03.11-0.4.20201222gitb84a992.el8.x86_64
lvm2-lockd-2.03.11-0.4.20201222gitb84a992.el8.x86_64
vdo-6.2.4.14-14.el8.x86_64



# lvmconfig | grep vdo_pool_autoextend
	vdo_pool_autoextend_threshold=70  # vdo_pool_autoextend_percent is default (20%)
# lvcreate --vdo --size 10G --name lvol0 vg1
    Logical blocks defaulted to 1569686 blocks.
    The VDO volume can address 6 GB in 3 data slabs, each 2 GB.
    It can grow to address at most 16 TB of physical storage in 8192 slabs.
    If a larger maximum size might be needed, use bigger slabs.
  Logical volume "lvol0" created.
# mkfs.ext4 -E nodiscard /dev/vg1/lvol0
mke2fs 1.45.6 (20-Mar-2020)
Creating filesystem with 1568768 4k blocks and 392448 inodes
Filesystem UUID: 2f3d6a68-cb39-4617-9ef4-88c3327cd4a0
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done 

# mkdir /mnt/lvol0
# mount /dev/vg1/lvol0 /mnt/lvol0
# lvs -a
  LV             VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root           rhel_virt-047 -wi-ao----  <6.20g                                                      
  swap           rhel_virt-047 -wi-ao---- 820.00m                                                      
  lvol0          vg1           vwi-aov---   5.98g vpool0        0.01                                   
  vpool0         vg1           dwi-------  10.00g               40.05                                  
  [vpool0_vdata] vg1           Dwi-ao----  10.00g                                                      
# head -c 4G < /dev/urandom > /mnt/lvol0/random_data.bin
# lvs -a
  LV             VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root           rhel_virt-047 -wi-ao----  <6.20g                                                      
  swap           rhel_virt-047 -wi-ao---- 820.00m                                                      
  lvol0          vg1           vwi-aov---   5.98g vpool0        64.24                                  
  vpool0         vg1           dwi-------  12.00g               65.45                                  
  [vpool0_vdata] vg1           Dwi-ao----  12.00g                                                      
# head -c 1G < /dev/urandom > /mnt/lvol0/random_data.bin
# lvs -a
  LV             VG            Attr       LSize   Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root           rhel_virt-047 -wi-ao----  <6.20g                                                      
  swap           rhel_virt-047 -wi-ao---- 820.00m                                                      
  lvol0          vg1           vwi-aov---   5.98g vpool0        83.56                                  
  vpool0         vg1           dwi-------  14.40g               65.38                                  
  [vpool0_vdata] vg1           Dwi-ao----  14.40g
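The observed growth matches the 20% autoextend step: 10 GiB → 12 GiB → 14.40 GiB. A quick sketch of the arithmetic (sizes in MiB to keep the integer math exact):

```shell
# Each autoextend step grows the pool by vdo_pool_autoextend_percent (20%).
size_mib=$(( 10 * 1024 ))
step_pct=20
size_mib=$(( size_mib + size_mib * step_pct / 100 ))   # 12288 MiB = 12 GiB
echo "$size_mib"
size_mib=$(( size_mib + size_mib * step_pct / 100 ))   # 14745 MiB ~ 14.40 GiB
echo "$size_mib"
```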

Comment 14 errata-xmlrpc 2021-05-18 15:01:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (lvm2 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:1659

