Bug 1610260
| Summary: | Grow metadata automatically when thin-pool data size grows | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | nikhil kshirsagar <nkshirsa> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Thin Provisioning | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | unspecified | CC: | agk, cmarthal, heinzm, jbrassow, loberman, mcsontos, msnitzer, prajnoha, rhandlin, thornber, zkabelac |
| Version: | 7.6 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.185-1.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-08-06 13:10:41 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1577173 | | |
Description
nikhil kshirsagar
2018-07-31 10:22:30 UTC
Add a patch to automatically increase the size of the metadata LV when the pool is resized.

https://www.redhat.com/archives/lvm-devel/2019-April/msg00003.html

with 2 extra patches:

https://www.redhat.com/archives/lvm-devel/2019-April/msg00004.html
https://www.redhat.com/archives/lvm-devel/2019-April/msg00002.html

So no warning will be printed - the pool will automatically adapt its metadata size to match the bigger data size.

*** Bug 1637121 has been marked as a duplicate of this bug. ***

Backported:

https://www.redhat.com/archives/lvm-devel/2019-April/msg00074.html
https://www.redhat.com/archives/lvm-devel/2019-April/msg00075.html
https://www.redhat.com/archives/lvm-devel/2019-April/msg00076.html
https://www.redhat.com/archives/lvm-devel/2019-April/msg00077.html
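A minimal sketch of the intended behaviour after these patches, assuming a build that carries them (lvm2-2.02.185-1.el7 or later) and hypothetical names `vg/thinpool`; the sizes and columns shown are illustrative only, not taken from this bug:

```
# With the fix, extending the thin pool's data far enough also grows the
# pool's metadata LV in the same operation, with no extra warning printed.
lvextend -L +10G vg/thinpool

# Both thinpool_tdata and, when the new estimate requires it, thinpool_tmeta
# are now larger.
lvs -a -o lv_name,lv_size,data_percent,metadata_percent vg
```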
What is the extend size threshold that triggers the automatic meta extend for this bug fix? Also, "_tdata" devices have never been able to be resized directly.

```
[root@hayes-02 ~]# lvresize -L +46M /dev/snapper_thinp/POOL_tdata
  Can't resize internal logical volume snapper_thinp/POOL_tdata.

[root@hayes-02 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize  Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-aot--- 2.00g              0.53   11.72  POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 2.00g                            /dev/sdn1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                            /dev/sdo1(0)
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                            /dev/sdn1(0)
  meta_resize     snapper_thinp Vwi-a-t--- 1.00g  POOL origin 1.04
  origin          snapper_thinp Vwi-aot--- 1.00g  POOL        1.04
  other1          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other2          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other3          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other4          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other5          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00

# No auto meta extension after manual resize
[root@hayes-02 ~]# lvextend -L +46M /dev/snapper_thinp/POOL
  Rounding size to boundary between physical extents: 48.00 MiB.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (<2.05 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tdata changed from 2.00 GiB (512 extents) to <2.05 GiB (524 extents).
  Logical volume snapper_thinp/POOL_tdata successfully resized.

[root@hayes-02 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize  Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-aot--- <2.05g             0.52   11.72  POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- <2.05g                           /dev/sdn1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                            /dev/sdo1(0)
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                            /dev/sdn1(0)
  meta_resize     snapper_thinp Vwi-a-t--- 1.00g  POOL origin 1.04
  origin          snapper_thinp Vwi-aot--- 1.00g  POOL        1.04
  other1          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other2          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other3          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other4          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other5          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00

# No auto meta extension after manual resize
[root@hayes-02 ~]# lvextend -L +46M /dev/snapper_thinp/POOL
  Rounding size to boundary between physical extents: 48.00 MiB.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (2.09 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tdata changed from <2.05 GiB (524 extents) to 2.09 GiB (536 extents).
  Logical volume snapper_thinp/POOL_tdata successfully resized.

[root@hayes-02 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize  Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-aot--- 2.09g              0.50   11.72  POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 2.09g                            /dev/sdn1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                            /dev/sdo1(0)
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                            /dev/sdn1(0)
  meta_resize     snapper_thinp Vwi-a-t--- 1.00g  POOL origin 1.04
  origin          snapper_thinp Vwi-aot--- 1.00g  POOL        1.04
  other1          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other2          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other3          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other4          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other5          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00

# No auto meta extension after manual resize
[root@hayes-02 ~]# lvextend -L +1G /dev/snapper_thinp/POOL
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (3.09 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tdata changed from 2.09 GiB (536 extents) to 3.09 GiB (792 extents).
  Logical volume snapper_thinp/POOL_tdata successfully resized.

[root@hayes-02 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize  Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-aot--- 3.09g              0.34   11.82  POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 3.09g                            /dev/sdn1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                            /dev/sdo1(0)
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                            /dev/sdn1(0)
  meta_resize     snapper_thinp Vwi-a-t--- 1.00g  POOL origin 1.04
  origin          snapper_thinp Vwi-aot--- 1.00g  POOL        1.04
  other1          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other2          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other3          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other4          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00
  other5          snapper_thinp Vwi-a-t--- 1.00g  POOL        0.00

# Here it finally gets auto extended after a manual pool extend
[root@hayes-02 ~]# lvextend -L +10G /dev/snapper_thinp/POOL
  Rounding size to boundary between physical extents: 16.00 MiB.
  WARNING: Sum of all thin volume sizes (7.00 GiB) exceeds the size of thin pools (3.09 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tmeta changed from 4.00 MiB (1 extents) to 16.00 MiB (4 extents).
  Size of logical volume snapper_thinp/POOL_tdata changed from 3.09 GiB (792 extents) to 13.09 GiB (3352 extents).
```

To add to the confusion, see this comment in the bug dup'ed to this one: https://bugzilla.redhat.com/show_bug.cgi?id=1637121#c7 The QA ack for this bug was granted for a new "WARNING" message, and only after reading the final comment do I see that that is no longer the case. Please edit the subject of this bug to reflect what this bug is about and provide what needs to be verified.

I've updated the title of the bug to better match the final solution. As for comment 8 - the metadata size grows when it goes out of bounds for the given pool data size. Since the _tmeta size is always rounded to the extent size, the jump in data size can look quite big before the next resize point for the metadata is reached.
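To put rough numbers on those resize points for the transcript above, here is an illustrative calculation. It assumes the pool's default 64 KiB chunk size, the 4 MiB extent size used on this VG, and roughly 64 bytes of metadata per data chunk as implied by the chunk-size formula quoted in the following comment; it is an approximation, not the exact lvm2 accounting:

```
# Rough estimate of the metadata a thin pool needs for a given data size
# (~64 bytes per 64 KiB chunk), rounded up to whole 4 MiB extents.
chunk_kib=64
extent_mib=4
for data_mib in 2048 2096 3168 13408; do   # ~2.00g, <2.05g, 3.09g, 13.09g as above
    chunks=$(( data_mib * 1024 / chunk_kib ))
    meta_mib=$(( (chunks * 64 + 1048575) / 1048576 ))                    # ceil to MiB
    rounded=$(( (meta_mib + extent_mib - 1) / extent_mib * extent_mib )) # ceil to extents
    echo "data ${data_mib} MiB -> ~${meta_mib} MiB metadata -> _tmeta ${rounded} MiB"
done
# Only the 13.09 GiB step pushes the estimate past one 4 MiB extent (to 16 MiB),
# which is why the smaller extends above did not trigger an automatic _tmeta resize.
```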
An extra note: when a PV list is specified, only the given LV grows (data or metadata).

Also yes - syntactic sugar was added, so lvm2 now automatically grows the thin pool when _tdata is specified for resize (in low-level technical detail, it detects the thin-pool data resize and turns it into a resize of the thin pool itself).

A really low-level detail of how lvm2 calculates the chunk_size:

* nr_pool_blocks = pool_data_size / pool_metadata_size
* chunk_size = nr_pool_blocks * 64 bytes / 512 bytes

The inverted formula can be used to estimate the 'optimal' metadata size for an already given chunk_size.
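As an illustration of that inversion, a minimal shell sketch (not lvm2 code; the 100 GiB data size and 64 KiB chunk size are assumed values):

```
# Estimate an 'optimal' thin-pool metadata size by inverting the formula above:
#   chunk_size (sectors) = (pool_data_size / pool_metadata_size) * 64 / 512
#   => pool_metadata_size = pool_data_size * 64 / (chunk_size_sectors * 512)
data_bytes=$(( 100 * 1024 * 1024 * 1024 ))   # assumed: 100 GiB of pool data
chunk_sectors=128                            # assumed: 64 KiB chunks (128 x 512 B sectors)
meta_bytes=$(( data_bytes * 64 / (chunk_sectors * 512) ))
echo "estimated optimal metadata size: $(( meta_bytes / 1024 / 1024 )) MiB"   # -> 100 MiB
```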
Do we have any idea how big customer pool _tdata devices might be? I assume much larger than the 1G one in my example, so I'll mark this bug verified with the caveat that the user needs to grow a pool *enough* to trigger the automatic metadata resize, or else the metadata device will actually end up more full after a small resize.

```
3.10.0-1057.el7.x86_64

lvm2-2.02.185-2.el7                        BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-libs-2.02.185-2.el7                   BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-cluster-2.02.185-2.el7                BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-lockd-2.02.185-2.el7                  BUILT: Fri Jun 21 04:18:48 CDT 2019
lvm2-python-boom-0.9-18.el7                BUILT: Fri Jun 21 04:18:58 CDT 2019
cmirror-2.02.185-2.el7                     BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-1.02.158-2.el7               BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-libs-1.02.158-2.el7          BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-1.02.158-2.el7         BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-event-libs-1.02.158-2.el7    BUILT: Fri Jun 21 04:18:48 CDT 2019
device-mapper-persistent-data-0.8.5-1.el7  BUILT: Mon Jun 10 03:58:20 CDT 2019
```

```
# Thin Pool _tdata is 1g and _tmeta is 4m and *full*
[root@hayes-01 ~]# lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_636
  WARNING: Remaining free space in metadata of thin pool snapper_thinp/POOL is too low (75.00% >= 75.00%). Resize is recommended.
  Cannot create new thin volume, free space in thin pool snapper_thinp/POOL reached threshold.

# User grows thin-pool data and would presumably expect the metadata to grow, but it doesn't
[root@hayes-01 ~]# lvextend -L +1G /dev/snapper_thinp/POOL
  WARNING: Sum of all thin volume sizes (641.00 GiB) exceeds the size of thin pools (4.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tdata changed from 2.00 GiB (512 extents) to 4.00 GiB (1024 extents).
  Logical volume snapper_thinp/POOL_tdata successfully resized.

# In reality, this makes the _tmeta even more full
[root@hayes-01 ~]# lvs -a -o +devices snapper_thinp/POOL
  [...]
  LV   VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  POOL snapper_thinp twi-aotz-- 2.00g             1.94   75.10                            POOL_tdata(0)

[root@hayes-01 ~]# lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_636
  WARNING: Remaining free space in metadata of thin pool snapper_thinp/POOL is too low (75.10% >= 75.00%). Resize is recommended.
  Cannot create new thin volume, free space in thin pool snapper_thinp/POOL reached threshold.

# User again grows thin-pool data and would presumably expect the metadata to grow.
[root@hayes-01 ~]# lvextend -L +2G /dev/snapper_thinp/POOL
  WARNING: Sum of all thin volume sizes (641.00 GiB) exceeds the size of thin pools (4.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Size of logical volume snapper_thinp/POOL_tdata changed from 2.00 GiB (512 extents) to 4.00 GiB (1024 extents).
  Logical volume snapper_thinp/POOL_tdata successfully resized.

# Again, this makes the _tmeta more full
[root@hayes-01 ~]# lvs -a -o +devices snapper_thinp/POOL
  [...]
  LV   VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  POOL snapper_thinp twi-aotz-- 4.00g             0.97   75.29                            POOL_tdata(0)

[root@hayes-01 ~]# lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_636
  WARNING: Remaining free space in metadata of thin pool snapper_thinp/POOL is too low (75.29% >= 75.00%). Resize is recommended.
  Cannot create new thin volume, free space in thin pool snapper_thinp/POOL reached threshold.

# Granted if the user does this enough (or knows to grow to 10G) the meta device will eventually be auto resized and they can again create devices, etc.
[root@hayes-01 ~]# lvextend -L +2G /dev/snapper_thinp/POOL

[root@hayes-01 ~]# lvs -a -o +devices snapper_thinp/POOL_tmeta
  LV           VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  [POOL_tmeta] snapper_thinp ewi-ao---- 8.00m                                             /dev/sdk1(0)

[root@hayes-01 ~]# lvs -a -o +devices snapper_thinp/POOL
  LV   VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  POOL snapper_thinp twi-aotz-- 6.00g             0.65   42.72                            POOL_tdata(0)

[root@hayes-01 ~]# lvcreate -y -k n -s /dev/snapper_thinp/origin -n many_636
  WARNING: Sum of all thin volume sizes (642.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (6.00 GiB).
  WARNING: You have not turned on protection against thin pools running out of space.
  WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
  Logical volume "many_636" created.
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2253

*** Bug 1676359 has been marked as a duplicate of this bug. ***

*** Bug 1676360 has been marked as a duplicate of this bug. ***
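One practical note on the verification caveat described above: when the pool data is grown only slightly and the next automatic resize point is not reached, the metadata LV can also be grown explicitly. A minimal sketch using the long-standing --poolmetadatasize option, with the pool name from the transcripts and an illustrative size:

```
# Grow only the thin pool's metadata LV by an extra 4 MiB (size is illustrative).
lvextend --poolmetadatasize +4M snapper_thinp/POOL
```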