Red Hat Bugzilla – Bug 1189221
LVM should not default to thin_pool_autoextend_threshold = 100
Last modified: 2016-11-04 00:08:24 EDT
Description of problem: Using thin provisioning (thinp) can lead to exhaustion of the thin pool's data space, which in turn leads to bad crashes.

Version-Release number of selected component (if applicable):
kernel-3.10.0-225.el7.drmupdate4.x86_64
xfsprogs-3.2.1-5.el7.x86_64
lvm2-libs-2.02.113-1.el7.x86_64
lvm2-2.02.113-1.el7.x86_64
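For illustration only, a minimal sketch of how over-provisioning can exhaust a thin pool's data space; the volume group name "vg", the LV names and the sizes are placeholders, not taken from this report:

  # create a 1 GiB thin pool and a 10 GiB thin volume on top of it (over-provisioned)
  lvcreate -L 1G -T vg/pool
  lvcreate -V 10G -T vg/pool -n thinvol

  # writing more than 1 GiB of real data fills the pool; once Data% hits 100
  # and no autoextension is configured, further writes to the thin volume fail
  mkfs.xfs /dev/vg/thinvol
  mount /dev/vg/thinvol /mnt
  dd if=/dev/zero of=/mnt/fill bs=1M count=2048

  # watch pool usage
  lvs -o lv_name,data_percent,metadata_percent vg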
The problem was resolved slightly differently. Whenever the user 'over-provisions' thin-pool space and auto extension is NOT configured, a WARNING about this state is shown by the 'lvcreate' command. As long as there is 'enough' space in the thin pool for all thin volumes, autoextension (and the associated dmeventd monitoring) is not needed. Applies from lvm2 version 2.02.124 and newer.
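For reference, a minimal lvm.conf sketch of what "auto extension configured" means here; the 70/20 values are illustrative assumptions, not values required by this bug:

  activation {
      # extend the thin pool automatically once it is 70% full ...
      thin_pool_autoextend_threshold = 70
      # ... growing it by 20% of its current size each time
      thin_pool_autoextend_percent = 20
      # dmeventd monitoring must be enabled for autoextension to trigger
      monitoring = 1
  }

Leaving thin_pool_autoextend_threshold at 100 disables autoextension, which is why the WARNING above is printed instead.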
This bug was accidentally moved from POST to MODIFIED via an error in automation, please see mmccune@redhat.com with any questions
Verified that the warning mentioned in comment #2 is present when attempting to over-provision with either lvextend or lvcreate while auto extension is turned off (threshold set to 100). However, unlike what the subject states, the default does remain at 100.

3.10.0-419.el7.x86_64
lvm2-2.02.156-1.el7                              BUILT: Mon Jun 13 03:05:51 CDT 2016
lvm2-libs-2.02.156-1.el7                         BUILT: Mon Jun 13 03:05:51 CDT 2016
lvm2-cluster-2.02.156-1.el7                      BUILT: Mon Jun 13 03:05:51 CDT 2016
device-mapper-1.02.126-1.el7                     BUILT: Mon Jun 13 03:05:51 CDT 2016
device-mapper-libs-1.02.126-1.el7                BUILT: Mon Jun 13 03:05:51 CDT 2016
device-mapper-event-1.02.126-1.el7               BUILT: Mon Jun 13 03:05:51 CDT 2016
device-mapper-event-libs-1.02.126-1.el7          BUILT: Mon Jun 13 03:05:51 CDT 2016
device-mapper-persistent-data-0.6.2-0.1.rc8.el7  BUILT: Wed May  4 02:56:34 CDT 2016

[root@host-083 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-aot--- 2.00g             0.02   1.66   POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 2.00g                           /dev/sdh1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                           /dev/sdb1(0)
  extend_snap     snapper_thinp Vwi-a-t--- 1.20t POOL origin 0.00
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                           /dev/sdh1(0)
  origin          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  other1          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  other2          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  other3          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  other4          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  other5          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01

[root@host-083 ~]# lvcreate --virtualsize 1G -T snapper_thinp/POOL -n other6
  WARNING: Sum of all thin volume sizes (1.21 TiB) exceeds the size of thin pool snapper_thinp/POOL and the size of whole volume group (124.96 GiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  Logical volume "other6" created.

[root@host-083 ~]# lvextend -l+1000000%FREE snapper_thinp/other1
  WARNING: Sum of all thin volume sizes (1.17 PiB) exceeds the size of thin pool snapper_thinp/POOL and the size of whole volume group (124.96 GiB)!
  For thin pool auto extension activation/thin_pool_autoextend_threshold should be below 100.
  Size of logical volume snapper_thinp/other1 changed from 1.00 GiB (256 extents) to 1.17 PiB (314760256 extents).
  Logical volume other1 successfully resized.

[root@host-083 ~]# lvs -a -o +devices
  LV              VG            Attr       LSize Pool Origin Data%  Meta%  Devices
  POOL            snapper_thinp twi-aot--- 2.00g             0.02   1.76   POOL_tdata(0)
  [POOL_tdata]    snapper_thinp Twi-ao---- 2.00g                           /dev/sdh1(1)
  [POOL_tmeta]    snapper_thinp ewi-ao---- 4.00m                           /dev/sdb1(0)
  extend_snap     snapper_thinp Vwi-a-t--- 1.20t POOL origin 0.00
  [lvol0_pmspare] snapper_thinp ewi------- 4.00m                           /dev/sdh1(0)
  origin          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  other1          snapper_thinp Vwi-a-t--- 1.17p POOL        0.00
  other2          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  other3          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  other4          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  other5          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
  other6          snapper_thinp Vwi-a-t--- 1.00g POOL        0.01
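As a follow-up sketch (not part of the original verification output), the effective threshold and monitoring state can be inspected before deciding whether to enable autoextension; the VG/pool names reuse the ones above, and lvmconfig is assumed to be available in this lvm2 build:

  # show the currently effective threshold (100 here, i.e. autoextension disabled)
  lvmconfig activation/thin_pool_autoextend_threshold

  # check whether dmeventd is monitoring the thin pool
  lvs -o lv_name,seg_monitor snapper_thinp/POOL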
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-1445.html