Description of problem:
I'm seeing the same behaviour as https://www.redhat.com/archives/linux-lvm/2015-April/msg00017.html. I couldn't find an existing entry in bugzilla. Feel free to close this if it's already resolved.
I haven't looked in detail at how auto-extension for thin pool metadata works, but I saw that my thin pool metadata was more than 30% full. The thin pool was set up for /home during the OS installation with stock settings in the GUI installer. I distinctly remember that there wasn't an option to specify the metadata size, so I'm hoping that it chose the small size with some auto-extension enabled.
Anyway, the thin pool metadata size was set to 76M, and since it was getting full pretty fast, I extended it using lvextend. Now there is a disparity between the sizes of the metadata LV and pmspare.
lvs -a -oname,attr,size rhel_foobar
  LV              Attr       LSize
  home            Vwi-aotz--   9.02t
  home-snapshot1  Vri---tz-k   9.02t
  [lvol0_pmspare] ewi-------  76.00m
  pool00          twi-aotz--   9.07t
  [pool00_tdata]  Twi-ao----   9.07t
  [pool00_tmeta]  ewi-ao---- 304.00m
  root            Vwi-aotz--  50.00g
  swap            -wi-ao----   7.88g
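The mismatch is visible in the listing above: [lvol0_pmspare] is 76.00m while [pool00_tmeta] is 304.00m. A minimal sketch of how one might spot this automatically (the `check_pmspare` helper and the embedded sample listing are illustrative, not part of lvm2; on a real system you would pipe in `lvs -a -o name,attr,size --noheadings <VG>` instead):

```shell
#!/bin/sh
# Hypothetical helper: compare the size of a pool's _tmeta LV against the
# VG's [lvol0_pmspare] in `lvs -a -o name,attr,size` style output.
check_pmspare() {
    awk '
        /lvol0_pmspare/ { spare = $3 }   # third column is LSize
        /_tmeta/        { meta  = $3 }
        END {
            if (spare != meta)
                printf "MISMATCH: pmspare=%s tmeta=%s\n", spare, meta
            else
                print "OK: pmspare matches tmeta"
        }'
}

# Sample input: the listing from this report.
check_pmspare <<'EOF'
  home            Vwi-aotz--   9.02t
  [lvol0_pmspare] ewi-------  76.00m
  pool00          twi-aotz--   9.07t
  [pool00_tdata]  Twi-ao----   9.07t
  [pool00_tmeta]  ewi-ao---- 304.00m
EOF
```

On the data above this prints `MISMATCH: pmspare=76.00m tmeta=304.00m`.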
vgdisplay rhel_foobar
  --- Volume group ---
  VG Name               rhel_foobar
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  13
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                5
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               9.10 TiB
  PE Size               4.00 MiB
  Total PE              2384258
  Alloc PE / Size       2380246 / 9.08 TiB
  Free PE / Size        4012 / 15.67 GiB
  VG UUID               Nm1pgt-K8EJ-lHaW-tGZj-eczO-NuH5-9I5FEb
The underlying filesystem is xfs. Is this a problem? Are there any remedial steps that I can take?
Version-Release number of selected component (if applicable):
How reproducible:
Steps to Reproduce:
1. lvextend -L+152M rhel_foobar/pool00_tmeta
Actual results:
1. Only the metadata LV (pool00_tmeta) is extended; [lvol0_pmspare] keeps its original size
Expected results:
pmspare should also be extended, or a warning and/or documentation should be provided if leaving it smaller than the metadata LV is not safe
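For reference, the more common idiom is to grow the metadata through the pool LV itself with `lvextend --poolmetadatasize`, rather than addressing pool00_tmeta by name; whether lvol0_pmspare is grown to match depends on the lvm2 version (see the maintainer's comment further down). A sketch that only prints the command rather than running it, using the VG/pool names and size from this report:

```shell
#!/bin/sh
# Sketch only: build and print (do not run) the usual command for growing
# thin-pool metadata. Names and size are taken from this report.
VG=rhel_foobar
POOL=pool00
GROW_BY=+152M

cmd="lvextend --poolmetadatasize $GROW_BY $VG/$POOL"
echo "$cmd"   # run manually once verified against lvmthin(7) on your system
```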
Additional info:
I just had a look at /etc/lvm/lvm.conf, and I noticed thin_pool_autoextend_threshold = 100. So the metadata was never going to extend on its own? That's puzzling. Why does the OS installer set such a value then?
(In reply to prasun.gera from comment #2)
> I just had a look at /etc/lvm/lvm.conf, and I noticed
> thin_pool_autoextend_threshold = 100. So the metadata was never going to
> extend on its own? That's puzzling. Why does the OS installer set such a
> value then?
lvm2's default configuration is set to NOT auto-extend any LV, as we do not know what the free space in the VG is meant for
(same as with the old 'thick' snapshots).
So it is always up to the machine administrator to decide what may 'eat' space in the VG.
Note: this allocation policy should probably be much more advanced, e.g. allowing a 'max' size the pool can grow to, but that is probably a big can of worms.
Anyway, at the moment, if you want the thin pool to auto-extend, you need to set the threshold to a value lower than 100%; from then on (when monitoring is enabled) the thin pool can 'eat' free space in your VG.
I also believe the current version of lvm2 maintains the size of _pmspare much better, and _pmspare should be upsized to the biggest metadata LV in the VG (for recovery purposes).
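The auto-extend settings discussed above live in the activation section of /etc/lvm/lvm.conf. A minimal fragment, with illustrative values (70/20 is a common starting point, not a recommendation from this report); note that dmeventd monitoring must be active for auto-extension to trigger:

```
activation {
    # Auto-extend a thin pool once its data or metadata usage reaches 70%
    # (100 disables auto-extension, which is the default)...
    thin_pool_autoextend_threshold = 70
    # ...growing it by 20% of its current size each time.
    thin_pool_autoextend_percent = 20
}
```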