Bug 1465974 - Improve warnings about available space in a thin pool
Summary: Improve warnings about available space in a thin pool
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: David Teigland
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-06-28 14:55 UTC by David Teigland
Modified: 2021-09-03 12:35 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-01-15 07:39:01 UTC
Target Upstream Version:
Embargoed:



Description David Teigland 2017-06-28 14:55:36 UTC
Description of problem:

(Copying here from an old lvm etherpad page.)

Warnings related to thin pools.

Make these configurable by % level, where 0% disables warnings.
With these warnings in place, remove the warning about overprovisioning when thin LVs are created.

$ lvs dd
WARNING: thin pool "pool" is 75% full.
  LV    VG Attr       LSize   Pool 
  lv1   dd rwi---r---   1.00g      
  lv2   dd -wi-------   1.00g      
  lvol0 dd -wi-------   1.00g      
  lvol1 dd -wi-------   1.00g      
  pool  dd twi---tz--   1.00g      
  thin1 dd Vwi---tz-- 100.00g pool 
  
  
$ lvcreate -n thin2 -V100G --thinpool dd/pool
WARNING: thin pool "pool" is 75% full.
  Logical volume "thin2" created.


Examples of other kinds of warnings we could print when the %full level reaches the configured value.

$ lvs dd
WARNING: thin pool "pool" is 75% full.
WARNING: thin pool "pool" will be extended at 90% full.
WARNING: thin pool "test" is 81% full.
WARNING: thin pool "test" will require manual extension.
WARNING: VG "dd" has insufficient space for the next thin pool extensions.
WARNING: VG "dd" will require 20G of space for the next thin pool extensions, but 4GB is available.

Define a soft_warn_percent setting for each pool.  At this percent full, commands using the VG begin reporting warnings.  The soft_warn_percent would be set somewhat below the point where the pool is actually extended.  Setting soft_warn_percent=0 disables the warnings.

In the example above, "pool" and "test" have soft_warn_percent=75, so when each reaches 75% full, commands begin printing:
- the current percent full of pools over soft_warn_percent, e.g. "... is 75% full."
- the percent full at which a pool will be autoextended, e.g. "... will be extended at 90% full."
- if autoextend is not enabled for the pool, it prints that it will require manual extension.
- if the VG does not have sufficient free space for the next extension of each thin pool, it prints that warning.
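
For reference, a minimal sketch of where such a knob could sit alongside the existing autoextend settings; soft_warn_percent below is the setting proposed here, not an existing lvm.conf option:

# Existing lvm.conf knobs (activation section) that control autoextension;
# their defaults can be inspected with:
$ lvmconfig --type default activation/thin_pool_autoextend_threshold
$ lvmconfig --type default activation/thin_pool_autoextend_percent

# Proposed (hypothetical) per-pool warning setting sketched above:
#   soft_warn_percent = 75    # start printing warnings at 75% full
#   soft_warn_percent = 0     # disable the warnings entirely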

---

(I don't think reporting in terms of overprovisioning is very good; it's hard to understand what it means or if it's important.)

Another idea is to report the amount of overprovisioning in a VG.

$ lvs dd
WARNING: thin pool "pool" is 150% overprovisioned.

The actual %over that triggers the warning could be configurable for each pool, e.g. overprovisioning_warn_percent, with 0% meaning to disable the warning.
This message would require that we specifically define the meaning of overprovisioned percent.

Percents can get confusing, so it may be better to just report actual amounts of space that are overprovisioned, e.g.

$ lvs dd
WARNING: thin pool "pool" is 400GB overprovisioned.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:

Comment 2 David Teigland 2017-06-28 15:10:37 UTC
The motivation for designing configurable and useful warnings like those above was to remove this warning when you create a thin LV:

"WARNING: Sum of all thin volume sizes (%s) exceeds the size of thin pool%s%s%s (%s)!",

Better reporting about thin pool usage in general would make that warning unnecessary.

A user provided good feedback about this same issue:
https://www.redhat.com/archives/linux-lvm/2016-April/msg00032.html

Comment 3 Alasdair Kergon 2017-07-06 16:35:15 UTC
Well, step 1 is to define, as reporting fields, the fields that the warnings will use as their basic input.

So can we propose some new lvs -o fields that would provide the information required in the right form?

Step 2 is then to model the selection criteria for the warnings using -S.

Then that might provide us with a general framework:
  hook name (inserted at a given source location)
  -S condition to match
  warning message to issue

We then need to consider how to split the configurability between config file and VG metadata.  In the example given, is the warning threshold part of the VG metadata and/or part of lvm.conf?

So we probably need to work through more examples and see what sort of framework falls out.
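
For what it's worth, the existing thin-pool reporting fields and -S selection already come close to steps 1 and 2; a rough sketch (the field names are existing lvs -o fields, the 70% threshold is only illustrative):

# Step 1: fields a warning could be built from:
$ lvs -o vg_name,lv_name,data_percent,metadata_percent,lv_size -S 'segtype=thin-pool'

# Step 2: a selection condition a warning hook could match on:
$ lvs -o lv_name,data_percent -S 'segtype=thin-pool && data_percent > 70'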

Comment 4 Zdenek Kabelac 2018-08-22 08:21:48 UTC
The proposal seems useful for a user with very few resources, since a quick 'human mapping' can be made between a WARNING line and the actual problematic device.

However, as the number of LVs in the system grows, such output may become even more confusing, since the number of WARNING lines will tend to grow as well.

So it does seem we may need a few things - as comment 3 stated, the information should be accessible as a column field.

I also believe we should start using colorized output - the majority of today's admins use colorized terminals (we are not in 1980 with a black-and-white vt100 anymore :)), and IMHO it greatly enhances readability of e.g. journals...

I'd probably even welcome a new command designed just for reporting problems (e.g. lvproblems, or whatever better name comes up), giving individual, verbose messages about issues.

Supporting -S with lvs probably achieves the same goal, but other tools seem to provide individual commands as well, and that may be simpler - just brainstorming...

For the proposed new variables like overprovisioning_warn_percent, soft_warn_percent, ... there could be some concept of a 'PG rating' from 0..100: the admin sets their experience level and lvm2 translates warnings to a matching verbosity. (I can imagine this working very nicely with colorized output, where different LVs turn white -> orange -> red on different occasions...)
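
(Purely to illustrate the white -> orange -> red idea, something along these lines could be prototyped in a wrapper today; the 75/90 thresholds are just examples:)

$ lvs --noheadings -o lv_name,data_percent -S 'segtype=thin-pool' |
  while read lv pct; do
      p=${pct%.*}                                      # drop the decimal part
      if   [ "${p:-0}" -ge 90 ]; then c=$'\033[31m'    # red
      elif [ "${p:-0}" -ge 75 ]; then c=$'\033[33m'    # orange/yellow
      else                            c=$'\033[0m'     # default
      fi
      printf '%s%s %s%%\033[0m\n' "$c" "$lv" "$pct"
  done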

We have several 'similar' issues:

We have old snapshots running out of space.
We have raids, mirrors, thin pools, and caches requiring fixing.

So the choice is: add many specific individual settings for every single instance (users get lost in them very quickly), or think about some common pattern between them and control it via 'admin experience'.

There are various degrees of information needed to give users a qualified response.

Comment 5 David Teigland 2018-08-22 17:47:29 UTC
(I think the overprovisioning/percent output that I included at the end of the original description is a bad idea.)


For now, I think we should just improve the warning message to be clearer and more useful.

When autoextend is disabled or monitoring is not used, then print:

  WARNING: thin pool "pool" is 75% full and will require manual extension.

When autoextend is enabled but there's not enough space in the VG to do it, then print:

  WARNING: thin pool "pool" is 75% full and the VG has insufficent space to autoextend.

Allow the 75% level to be set by the user in lvm.conf according to how early they want to be warned.

There is another warning that we might want to print about all thin pools collectively if there's not enough space to autoextend all of them:

WARNING: VG "test" will require 10G to autoextend all thin pools, only 5G is available.

Comment 7 RHEL Program Management 2021-01-15 07:39:01 UTC
After evaluating this issue, we have no plans to address it further or fix it in an upcoming release, so it is being closed.  If plans change and this issue will be fixed in an upcoming release, the bug can be reopened.

