Bug 1610260 - Grow metadata automatically when thin-pool data size grows
Summary: Grow metadata automatically when thin-pool data size grows
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1637121, 1676359, 1676360
Depends On:
Blocks: 1577173
TreeView+ depends on / blocked
 
Reported: 2018-07-31 10:22 UTC by nikhil kshirsagar
Modified: 2021-09-09 15:14 UTC
CC List: 11 users

Fixed In Version: lvm2-2.02.185-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-06 13:10:41 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
  System ID                              Private  Priority  Status  Summary  Last Updated
  Red Hat Product Errata RHBA-2019:2253  0        None      None    None     2019-08-06 13:11:12 UTC

Description nikhil kshirsagar 2018-07-31 10:22:30 UTC
Description of problem:
Customers often extend the thin-pool data manually but forget to extend the metadata as well. We should therefore print a WARNING when a user manually resizes the 'data' of a thin pool, reminding them to also grow the metadata.



Expected results:


Additional info:
<nkshirsa> zkabelac,  Increasing a thinpool data without increasing the metadata accordingly is a serious problem in many cases. why can't we have a RFe to increase metadata accordingly as soon as thin pool is extended? customers forget to increase metadata when they increase thinpools manually, causing metadata full issues eventually.. 

<zkabelac> nkshirsa: 'normal use-case' is with monitoring - thus the size grows according needs
<nkshirsa> zkabelac, many many many custs have that disabled, esp autoextend etc
<zkabelac> nkshirsa: but we possibly may add WARNING if user manually resizes  'data'  that he should also grow metadata
<nkshirsa> yes please zkabelac 
<nkshirsa> can i file that rfe ?
<zkabelac> y
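
For context, a minimal sketch of the manual workflow being discussed above (volume group, pool name, and sizes are hypothetical); extending only the pool's data leaves the metadata LV at its original size unless it is extended explicitly:

# Extend only the thin-pool data; the metadata LV keeps its original size
lvextend -L +10G vg/thinpool

# What users frequently forget: extend the pool metadata as well
lvextend --poolmetadatasize +256M vg/thinpool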

Comment 3 Zdenek Kabelac 2019-04-03 14:34:46 UTC
Added a patch to automatically increase the size of the metadata LV when the pool is resized.

https://www.redhat.com/archives/lvm-devel/2019-April/msg00003.html

with 2 extra patches:

https://www.redhat.com/archives/lvm-devel/2019-April/msg00004.html
https://www.redhat.com/archives/lvm-devel/2019-April/msg00002.html


So no warning will be printed - the pool will automatically adapt the metadata size to match the larger data size.

Comment 4 Zdenek Kabelac 2019-04-03 14:39:12 UTC
*** Bug 1637121 has been marked as a duplicate of this bug. ***

Comment 8 Corey Marthaler 2019-06-04 17:57:43 UTC
What is the extend size threshold that triggers the automatic meta extend for this bug fix? Also, "_tdata" devices have never been able to be resized directly. 

[root@hayes-02 ~]# lvresize -L +46M /dev/snapper_thinp/POOL_tdata
  Can't resize internal logical volume snapper_thinp/POOL_tdata.

[root@hayes-02 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Devices      
  POOL            snapper_thinp twi-aot--- 2.00g             0.53   11.72  POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 2.00g                           /dev/sdn1(1) 
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                           /dev/sdo1(0) 
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                           /dev/sdn1(0) 
  meta_resize     snapper_thinp Vwi-a-t--- 1.00g POOL origin 1.04
  origin          snapper_thinp Vwi-aot--- 1.00g POOL        1.04
  other1          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other2          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other3          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other4          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other5          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00

# No auto meta extension after manual resize
[root@hayes-02 ~]# lvextend -L +46M /dev/snapper_thinp/POOL
  Rounding size to boundary between physical extents: 48.00 MiB.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (<2.05 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tdata changed from 2.00 GiB (512 extents) to <2.05 GiB (524 extents).
  Logical volume snapper_thinp/POOL_tdata successfully resized.
[root@hayes-02 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize  Pool Origin Data%  Meta%  Devices      
  POOL            snapper_thinp twi-aot--- <2.05g             0.52   11.72  POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- <2.05g                           /dev/sdn1(1) 
  [POOL_tmeta]    snapper_thinp ewi-ao----  4.00m                           /dev/sdo1(0) 
  [lvol0_pmspare] snapper_thinp ewi-------  4.00m                           /dev/sdn1(0) 
  meta_resize     snapper_thinp Vwi-a-t---  1.00g POOL origin 1.04
  origin          snapper_thinp Vwi-aot---  1.00g POOL        1.04
  other1          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other2          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other3          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other4          snapper_thinp Vwi-a-t---  1.00g POOL        0.00
  other5          snapper_thinp Vwi-a-t---  1.00g POOL        0.00

# No auto meta extension after manual resize
[root@hayes-02 ~]# lvextend -L +46M /dev/snapper_thinp/POOL
  Rounding size to boundary between physical extents: 48.00 MiB.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (2.09 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tdata changed from <2.05 GiB (524 extents) to 2.09 GiB (536 extents).
  Logical volume snapper_thinp/POOL_tdata successfully resized.
[root@hayes-02 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Devices      
  POOL            snapper_thinp twi-aot--- 2.09g             0.50   11.72  POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 2.09g                           /dev/sdn1(1) 
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                           /dev/sdo1(0) 
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                           /dev/sdn1(0) 
  meta_resize     snapper_thinp Vwi-a-t--- 1.00g POOL origin 1.04
  origin          snapper_thinp Vwi-aot--- 1.00g POOL        1.04
  other1          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other2          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other3          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other4          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other5          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00

# No auto meta extension after manual resize
[root@hayes-02 ~]# lvextend -L +1G /dev/snapper_thinp/POOL
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (3.09 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tdata changed from 2.09 GiB (536 extents) to 3.09 GiB (792 extents).
  Logical volume snapper_thinp/POOL_tdata successfully resized.
[root@hayes-02 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Devices      
  POOL            snapper_thinp twi-aot--- 3.09g             0.34   11.82  POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 3.09g                           /dev/sdn1(1) 
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                           /dev/sdo1(0) 
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                           /dev/sdn1(0) 
  meta_resize     snapper_thinp Vwi-a-t--- 1.00g POOL origin 1.04
  origin          snapper_thinp Vwi-aot--- 1.00g POOL        1.04
  other1          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other2          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other3          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other4          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00
  other5          snapper_thinp Vwi-a-t--- 1.00g POOL        0.00

# Here it finally gets auto extended after a manual pool extend
[root@hayes-02 ~]# lvextend -L +10G /dev/snapper_thinp/POOL
  Rounding size to boundary between physical extents: 16.00 MiB.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (3.09 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tmeta changed from 4.00 MiB (1 extents) to 16.00 MiB (4 extents).
  Size of logical volume snapper_thinp/POOL_tdata changed from 3.09 GiB (792 extents) to 13.09 GiB (3352 extents).

Comment 9 Corey Marthaler 2019-06-04 18:22:15 UTC
To add to the confusion, see this comment in the bug marked as a duplicate of this one:

https://bugzilla.redhat.com/show_bug.cgi?id=1637121#c7

The QA ack for this bug was granted for a new "WARNING" message; only after reading the final comment do I see that that is no longer the case. Please edit the subject of this bug to reflect what it is actually about and describe what needs to be verified.

Comment 10 Zdenek Kabelac 2019-06-05 11:04:01 UTC
I've updated the title of this bug to better match the final solution.

As for comment 8 - the metadata size grows when it becomes too small for the given pool data size.

Since the _tmeta size is always rounded up to the 'extent size', the jump in 'data' size may need to be quite big before the next 'resize' point for the metadata is reached.

An extra note: when a PV list is specified, only the given LV (data or metadata) grows.

Also yes - syntactic sugar was added, and lvm2 now automatically grows the thin pool when _tdata is specified for resize (in low-level technical detail: it detects a thin-pool data resize and converts it into a thin-pool resize operation).


A really low-level detail of how lvm2 calculates the chunk_size:

* nr_pool_blocks = pool_data_size / pool_metadata_size
* chunk_size = nr_pool_blocks * 64 bytes / 512 bytes  (i.e. the chunk size expressed in 512-byte sectors)

The inverse of this calculation can be used to estimate the 'optimal' metadata size for an already chosen chunk_size.
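
For illustration only (the numbers below are hypothetical, not taken from this bug), a worked estimate of the 'optimal' metadata size using the inverse of the formula above, assuming roughly 64 bytes of metadata per data chunk:

* a 1 TiB data LV with a 64 KiB chunk size holds 1 TiB / 64 KiB = 16,777,216 chunks
* at ~64 bytes of metadata per chunk, that is about 16,777,216 * 64 bytes = 1 GiB of metadata
* checking with the forward formula: nr_pool_blocks = 1 TiB / 1 GiB = 1024, so chunk_size = 1024 * 64 / 512 = 128 sectors = 64 KiB, which matches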

Comment 11 Corey Marthaler 2019-06-25 21:41:03 UTC
Do we have any idea how big customer pool _tdata devices might be? I assume much larger than the 1G one in my example, so I'll mark this bug verified with the caveat that the user needs to grow a pool *enough* to trigger the automatic metadata resize, or else the metadata device will actually end up more full after a small resize.

3.10.0-1057.el7.x86_64

lvm2-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-libs-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-cluster-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-lockd-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-python-boom-0.9-18.el7    BUILT: Fri Jun 21 04:18:58 CDT 2019
cmirror-2.02.185-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-libs-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-libs-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-persistent-data-0.8.5-1.el7    BUILT: Mon Jun 10 03:58:20 CDT 2019



# Thin Pool _tdata is 1g and _tmeta is 4m and *full*
[root@hayes-01 ~]# lvcreate  -y -k n -s /dev/snapper_thinp/origin -n many_636
  WARNING: Remaining free space in metadata of thin pool snapper_thinp/POOL is too low (75.00% >= 75.00%). Resize is recommended.
  Cannot create new thin volume, free space in thin pool snapper_thinp/POOL reached threshold.

# User grows thin-pool data and would presumably expect the metadata to grow, but it doesn't
[root@hayes-01 ~]# lvextend -L +1G /dev/snapper_thinp/POOL
  WARNING: Sum of all thin volume sizes (641.00 GiB) exceeds the size of thin pools (4.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tdata changed from 2.00 GiB (512 extents) to 4.00 GiB (1024 extents).
  Logical volume snapper_thinp/POOL_tdata successfully resized.

# In reality, this makes the _tmeta even more full
[root@hayes-01 ~]# lvs -a -o +devices snapper_thinp/POOL
  [...]
  LV   VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices      
  POOL snapper_thinp twi-aotz-- 2.00g             1.94   75.10                            POOL_tdata(0)

[root@hayes-01 ~]# lvcreate  -y -k n -s /dev/snapper_thinp/origin -n many_636
  WARNING: Remaining free space in metadata of thin pool snapper_thinp/POOL is too low (75.10% >= 75.00%). Resize is recommended.
  Cannot create new thin volume, free space in thin pool snapper_thinp/POOL reached threshold.

# User again grows thin-pool data and would presumably expect the metadata to grow.
[root@hayes-01 ~]# lvextend -L +2G /dev/snapper_thinp/POOL
  WARNING: Sum of all thin volume sizes (641.00 GiB) exceeds the size of thin pools (4.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tdata changed from 2.00 GiB (512 extents) to 4.00 GiB (1024 extents).
  Logical volume snapper_thinp/POOL_tdata successfully resized.

# Again, this makes the _tmeta even more full
[root@hayes-01 ~]# lvs -a -o +devices snapper_thinp/POOL
  [...]
  LV   VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices      
  POOL snapper_thinp twi-aotz-- 4.00g             0.97   75.29                            POOL_tdata(0)

[root@hayes-01 ~]# lvcreate  -y -k n -s /dev/snapper_thinp/origin -n many_636
  WARNING: Remaining free space in metadata of thin pool snapper_thinp/POOL is too low (75.29% >= 75.00%). Resize is recommended.
  Cannot create new thin volume, free space in thin pool snapper_thinp/POOL reached threshold.

# Granted, if the user does this enough (or knows to grow to 10G), the meta device will eventually be auto-resized and they can again create devices, etc.
[root@hayes-01 ~]# lvextend -L +2G /dev/snapper_thinp/POOL

[root@hayes-01 ~]# lvs -a -o +devices snapper_thinp/POOL_tmeta
  LV           VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices     
  [POOL_tmeta] snapper_thinp ewi-ao---- 8.00m                                                     /dev/sdk1(0)
[root@hayes-01 ~]# lvs -a -o +devices snapper_thinp/POOL
  LV   VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices      
  POOL snapper_thinp twi-aotz-- 6.00g             0.65   42.72                            POOL_tdata(0)

[root@hayes-01 ~]# lvcreate  -y -k n -s /dev/snapper_thinp/origin -n many_636
  WARNING: Sum of all thin volume sizes (642.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (6.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "many_636" created.

Comment 13 errata-xmlrpc 2019-08-06 13:10:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2253

Comment 14 Zdenek Kabelac 2020-09-21 14:27:52 UTC
*** Bug 1676359 has been marked as a duplicate of this bug. ***

Comment 15 Zdenek Kabelac 2021-02-01 15:17:17 UTC
*** Bug 1676360 has been marked as a duplicate of this bug. ***

