Description by Corey Marthaler, 2016-01-26 18:23:27 UTC
Description of problem:
This may end up not being a bug and instead just a simple change in behavior, but filing a BZ anyway...
This "double auto extend" test case has behaved the same way all through 6.* and 7.*; it's different only in 6.8. When writing enough data to cause two auto extends (1.0g -> 1.2g -> 1.44g), it now does one extend to 1.26g and then never needs the second extend, since usage at 1.26g doesn't trip the 70% threshold. Was new logic added to bump up the extend size (even with thin_pool_autoextend_percent set to 20) when I/O is continuing to fill the pool? The same change in behavior appears with both the thin_pool_autoextend_threshold and snapshot_autoextend_threshold settings.
[root@host-114 ~]# grep thin_pool_autoextend_threshold /etc/lvm/lvm.conf
# Configuration option activation/thin_pool_autoextend_threshold.
# thin_pool_autoextend_threshold = 70
thin_pool_autoextend_threshold = 70
[root@host-114 ~]# grep thin_pool_autoextend_percent /etc/lvm/lvm.conf
# Also see thin_pool_autoextend_percent.
# Configuration option activation/thin_pool_autoextend_percent.
# thin_pool_autoextend_percent = 20
thin_pool_autoextend_percent = 20
# FIRST AUTO EXTENSION (works as expected)
[root@host-114 ~]# lvcreate --thinpool POOL --zero n -L 1G --poolmetadatasize 4M snapper_thinp
Logical volume "POOL" created.
[root@host-114 ~]# lvcreate --virtualsize 1G -T snapper_thinp/POOL -n origin
Logical volume "origin" created.
[root@host-114 ~]# lvcreate -k n -s /dev/snapper_thinp/origin -n double_auto_extension
Logical volume "double_auto_extension" created.
[root@host-114 ~]# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Devices
POOL snapper_thinp twi-aot--- 1.00g 0.01 1.07 POOL_tdata(0)
[POOL_tdata] snapper_thinp Twi-ao---- 1.00g /dev/sda1(1)
[POOL_tmeta] snapper_thinp ewi-ao---- 4.00m /dev/sdb1(0)
double_auto_extension snapper_thinp Vwi-a-t--- 1.00g POOL origin 0.01
[lvol0_pmspare] snapper_thinp ewi------- 4.00m /dev/sda1(0)
origin snapper_thinp Vwi-a-t--- 1.00g POOL 0.01
[root@host-114 ~]# dd if=/dev/zero of=/dev/snapper_thinp/double_auto_extension bs=1M count=750
750+0 records in
750+0 records out
786432000 bytes (786 MB) copied, 5.71772 s, 138 MB/s
Jan 25 17:51:45 host-114 lvm[3923]: Size of logical volume snapper_thinp/POOL_tdata changed from 1.00 GiB (256 extents) to 1.20 GiB (308 extents).
Jan 25 17:51:45 host-114 kernel: device-mapper: thin: Data device (dm-3) discard unsupported: Disabling discard passdown.
Jan 25 17:51:45 host-114 lvm[3923]: Logical volume POOL successfully resized.
Jan 25 17:51:45 host-114 kernel: device-mapper: thin: 253:4: growing the data device from 16384 to 19712 blocks
[root@host-114 ~]# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Devices
POOL snapper_thinp twi-aot--- 1.20g 60.88 10.45 POOL_tdata(0)
[POOL_tdata] snapper_thinp Twi-ao---- 1.20g /dev/sda1(1)
[POOL_tmeta] snapper_thinp ewi-ao---- 4.00m /dev/sdb1(0)
double_auto_extension snapper_thinp Vwi-a-t--- 1.00g POOL origin 73.24
[lvol0_pmspare] snapper_thinp ewi------- 4.00m /dev/sda1(0)
origin snapper_thinp Vwi-a-t--- 1.00g POOL 0.01
# SECOND AUTO EXTENSION (works as expected)
[root@host-114 ~]# dd if=/dev/zero of=/dev/snapper_thinp/double_auto_extension bs=1M count=900
900+0 records in
900+0 records out
943718400 bytes (944 MB) copied, 7.87852 s, 120 MB/s
Jan 25 17:55:15 host-114 lvm[3923]: Size of logical volume snapper_thinp/POOL_tdata changed from 1.20 GiB (308 extents) to 1.45 GiB (370 extents).
Jan 25 17:55:15 host-114 kernel: device-mapper: thin: Data device (dm-3) discard unsupported: Disabling discard passdown.
Jan 25 17:55:15 host-114 kernel: device-mapper: thin: 253:4: growing the data device from 19712 to 23680 blocks
Jan 25 17:55:15 host-114 lvm[3923]: Logical volume POOL successfully resized.
[root@host-114 ~]# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Devices
POOL snapper_thinp twi-aot--- 1.45g 60.82 12.30 POOL_tdata(0)
[POOL_tdata] snapper_thinp Twi-ao---- 1.45g /dev/sda1(1)
[POOL_tmeta] snapper_thinp ewi-ao---- 4.00m /dev/sdb1(0)
double_auto_extension snapper_thinp Vwi-a-t--- 1.00g POOL origin 87.89
[lvol0_pmspare] snapper_thinp ewi------- 4.00m /dev/sda1(0)
origin snapper_thinp Vwi-a-t--- 1.00g POOL 0.01
# RESET, AND ATTEMPT TO CAUSE TWO EXTENSIONS BACK TO BACK (no longer causes two separate extends, and instead extends a little more the first time to avoid the second extend?)
[root@host-115 ~]# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Devices
POOL snapper_thinp twi-aotz-- 1.00g 0.00 1.56 POOL_tdata(0)
[POOL_tdata] snapper_thinp Twi-ao---- 1.00g /dev/sdd1(1)
[POOL_tmeta] snapper_thinp ewi-ao---- 4.00m /dev/sdf1(0)
double_auto_extension snapper_thinp Vwi-a-tz-- 1.00g POOL origin 0.00
[lvol0_pmspare] snapper_thinp ewi------- 4.00m /dev/sdd1(0)
origin snapper_thinp Vwi-a-tz-- 1.00g POOL 0.00
[root@host-115 ~]# dd if=/dev/zero of=/dev/snapper_thinp/double_auto_extension bs=1M count=900
900+0 records in
900+0 records out
943718400 bytes (944 MB) copied, 18.2402 s, 51.7 MB/s
[root@host-115 ~]# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Devices
POOL snapper_thinp twi-aotz-- 1.26g 69.66 12.79 POOL_tdata(0)
[POOL_tdata] snapper_thinp Twi-ao---- 1.26g /dev/sdd1(1)
[POOL_tmeta] snapper_thinp ewi-ao---- 4.00m /dev/sdf1(0)
double_auto_extension snapper_thinp Vwi-a-tz-- 1.00g POOL origin 87.89
[lvol0_pmspare] snapper_thinp ewi------- 4.00m /dev/sdd1(0)
origin snapper_thinp Vwi-a-tz-- 1.00g POOL 0.00
Version-Release number of selected component (if applicable):
2.6.32-604.el6.x86_64
lvm2-2.02.140-3.el6 BUILT: Thu Jan 21 05:40:10 CST 2016
lvm2-libs-2.02.140-3.el6 BUILT: Thu Jan 21 05:40:10 CST 2016
lvm2-cluster-2.02.140-3.el6 BUILT: Thu Jan 21 05:40:10 CST 2016
udev-147-2.66.el6 BUILT: Mon Jan 18 02:42:20 CST 2016
device-mapper-1.02.114-3.el6 BUILT: Thu Jan 21 05:40:10 CST 2016
device-mapper-libs-1.02.114-3.el6 BUILT: Thu Jan 21 05:40:10 CST 2016
device-mapper-event-1.02.114-3.el6 BUILT: Thu Jan 21 05:40:10 CST 2016
device-mapper-event-libs-1.02.114-3.el6 BUILT: Thu Jan 21 05:40:10 CST 2016
device-mapper-persistent-data-0.6.0-1.el6 BUILT: Wed Jan 20 11:23:29 CST 2016
cmirror-2.02.140-3.el6 BUILT: Thu Jan 21 05:40:10 CST 2016
How reproducible:
Every time
So here I'm not afraid to say: this is not a bug but a new feature.
The extension logic is now smarter: it checks whether the 'percent' jump would bring the resized volume below the 'threshold', and adapts the jump if needed so the result always lands BELOW the threshold (yes, saving repeated invocations of lvextend).
The change is there to give a more fluid reaction to a quickly filling thin pool or snapshot COW (both use the same logic).
In fact it should now be mostly 'safe' to use even a 1% jump in size if the user wants to minimize over-allocation of VG space.
So yes, allocation sizes are now going to be slightly different, but we still operate within the documented margins: the monitored LV is supposed to stay below the given threshold as long as there is free space in the VG.
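A minimal sketch of that adaptive calculation, as I read the description above (hypothetical names and rounding; not the actual lvm2/dmeventd code):

    import math

    # Sketch of the 'smart' autoextend described above; my own reading of the
    # comment, NOT the lvm2 implementation. A 4 MiB extent size is assumed,
    # matching the VG in this report.
    EXTENT_MIB = 4

    def autoextend(size_mib, used_mib, threshold_pct, percent_pct):
        """Return the new pool size after one autoextend event."""
        if used_mib / size_mib * 100 < threshold_pct:
            return size_mib                        # still below threshold: no resize
        jump = size_mib * (1 + percent_pct / 100)  # the plain 'percent' jump
        fit = used_mib / (threshold_pct / 100)     # smallest size that clears the threshold
        new = max(jump, fit)                       # adapt the jump if it would not be enough
        return math.ceil(new / EXTENT_MIB) * EXTENT_MIB  # round up to whole extents

    print(autoextend(1024, 750, 70, 20))  # 1232 MiB = 308 extents ~ 1.20g (plain jump suffices)
    print(autoextend(1024, 900, 70, 20))  # 1288 MiB ~ 1.26g (jump adapted to clear 70%)

Under these assumptions the sketch reproduces both sizes seen in the transcripts: a plain 20% jump when that is enough, and a slightly larger single extend when it is not.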
Just a note here: I learned the behavior must depend on the speed of the dd I/O. If I run the exact same script with the exact same volumes, same pool profile, and same dd command, the auto extend is sometimes done in one "smart" extend and sometimes in multiple extends. Adjusting the test case to not care anymore...
Yep, it's the nature of the resize calculations.
The percentage is evaluated on the still-running device, so it largely depends on how quickly you are able to fill the thin device between the time-of-check and the actual time-of-use.
In the future we may evaluate a better strategy for thin pools, like suspending all users, but that would still need more work on the kernel side to avoid any deadlock scenarios.
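To illustrate that time-of-check/time-of-use effect, here is a toy model (made-up numbers and a crude fixed-interval check; nothing to do with the actual dmeventd monitoring):

    # Toy model: usage is sampled at fixed intervals while writes continue,
    # so the measured percentage at time-of-check depends on the fill rate.
    def simulate(fill_rate_mib_s, check_interval_s=1.0,
                 size=1024.0, threshold=70, percent=20, total=900):
        used, extends = 0.0, 0
        while used < total:
            used = min(total, used + fill_rate_mib_s * check_interval_s)
            if used / size * 100 >= threshold:          # time-of-check sample
                size = max(size * (1 + percent / 100),  # plain percent jump...
                           used / (threshold / 100))    # ...or enough to clear threshold
                extends += 1
        return extends, round(size)

    print(simulate(fill_rate_mib_s=50))   # slow dd:  (2, 1475) -> two extends, ~1.44g
    print(simulate(fill_rate_mib_s=500))  # fast dd:  (1, 1286) -> one bigger extend, ~1.26g

The faster the writes, the more usage overshoots between checks, and the more likely a single adapted extend covers everything at once.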