Red Hat Bugzilla – Bug 1278515
Attempt to create thin|cache pool larger than possible leaves auxiliary volume behind
Last modified: 2018-04-10 11:19:55 EDT
Cloning this closed Fedora bug since this issue still exists in RHEL.

[root@host-109 ~]# lvcreate -L 2G --thinpool snapper_thinp/over_size_pool
  Volume group "snapper_thinp" has insufficient free space (119 extents): 512 required.

[root@host-109 ~]# lvs -a -o +devices
  LV    VG            Attr       LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  lvol0 snapper_thinp -wi------- 4.00m                                                     /dev/sda1(0)

3.10.0-327.el7.x86_64
lvm2-2.02.130-5.el7                        BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-libs-2.02.130-5.el7                   BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-cluster-2.02.130-5.el7                BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-1.02.107-5.el7               BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-libs-1.02.107-5.el7          BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-1.02.107-5.el7         BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-libs-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7  BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.130-5.el7                     BUILT: Wed Oct 14 08:27:29 CDT 2015
sanlock-3.2.4-1.el7                        BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7                    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.130-5.el7                  BUILT: Wed Oct 14 08:27:29 CDT 2015

+++ This bug was initially created as a clone of Bug #1124799 +++

Description of problem:
If creation of a thin pool requests a size slightly larger than the space available, the command fails to create the thin pool (which is expected) but leaves behind a new, unwanted volume, lvol0.

Version-Release number of selected component (if applicable):
lvm2-2.02.106-1.fc20.x86_64

How reproducible:
Always

Steps to Reproduce:
# my vg has only 1444 extents available
lvcreate -T fedora_unused-4-144/mypool -l 1500

Actual results:
  Logical volume "lvol0" created
  Volume group "fedora_unused-4-144" has insufficient free space (1474 extents): 1500 required.

Output of the same command with -vvvv is attached, and the lvol0 is indeed there with size 8m.

Expected results:
If the command fails, it should not leave auxiliary volumes behind.
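The extent counts in the error message follow directly from how the VG allocates space: a size request like "-L 2G" is converted into a count of fixed-size physical extents before the free-space check runs. A minimal sketch of that conversion, assuming the default 4 MiB extent size (consistent with the 4.00m lvol0 shown above):

```shell
# Sketch of how "lvcreate -L 2G" becomes "512 required" in the error message.
# Assumption: 4 MiB physical extent size, the LVM default.
extent_mib=4
request_mib=$((2 * 1024))              # -L 2G expressed in MiB
required=$((request_mib / extent_mib))
echo "required extents: $required"     # prints "required extents: 512"
```

With only 119 free extents in the VG, the check fails, but by that point the auxiliary lvol0 has already been allocated.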
--- Additional comment from Corey Marthaler on 2014-07-30 12:05:19 EDT ---

Reproduced this issue and will add a regression test case to the thin suite. The create attempt doesn't need to be just "slightly" larger: I reproduced this with an attempt that was 5X the size of the available space in the VG.

[root@host-026 ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  VG     4   0   0 wz--n- 29.98g 29.98g

[root@host-026 ~]# pvscan
  PV /dev/sda1   VG VG   lvm2 [7.50 GiB / 7.50 GiB free]
  PV /dev/sdb1   VG VG   lvm2 [7.50 GiB / 7.50 GiB free]
  PV /dev/sdc1   VG VG   lvm2 [7.50 GiB / 7.50 GiB free]
  PV /dev/sdd1   VG VG   lvm2 [7.50 GiB / 7.50 GiB free]

[root@host-026 ~]# vgs
  VG   #PV #LV #SN Attr   VSize  VFree
  VG     4   0   0 wz--n- 29.98g 29.98g

[root@host-026 ~]# lvcreate -T VG/mypool -L 150G
  Logical volume "lvol0" created
  Volume group "VG" has insufficient free space (7657 extents): 38400 required.

[root@host-026 ~]# echo $?
5

[root@host-026 ~]# lvs -a -o +devices
  LV    VG Attr       LSize  Pool Origin Data% Devices
  lvol0 VG -wi------- 76.00m                   /dev/sda1(0)

2.6.32-485.el6.x86_64
lvm2-2.02.108-1.el6                        BUILT: Thu Jul 24 10:29:50 CDT 2014
lvm2-libs-2.02.108-1.el6                   BUILT: Thu Jul 24 10:29:50 CDT 2014
lvm2-cluster-2.02.108-1.el6                BUILT: Thu Jul 24 10:29:50 CDT 2014
udev-147-2.57.el6                          BUILT: Thu Jul 24 08:48:47 CDT 2014
device-mapper-1.02.87-1.el6                BUILT: Thu Jul 24 10:29:50 CDT 2014
device-mapper-libs-1.02.87-1.el6           BUILT: Thu Jul 24 10:29:50 CDT 2014
device-mapper-event-1.02.87-1.el6          BUILT: Thu Jul 24 10:29:50 CDT 2014
device-mapper-event-libs-1.02.87-1.el6     BUILT: Thu Jul 24 10:29:50 CDT 2014
device-mapper-persistent-data-0.3.2-1.el6  BUILT: Fri Apr 4 08:43:06 CDT 2014
cmirror-2.02.108-1.el6                     BUILT: Thu Jul 24 10:29:50 CDT 2014

--- Additional comment from Fedora End Of Life on 2015-05-29 08:30:58 EDT ---

This message is a reminder that Fedora 20 is nearing its end of life. Approximately four weeks from now, Fedora will stop maintaining and issuing updates for Fedora 20.
It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '20'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 20 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.

--- Additional comment from Fedora End Of Life on 2015-06-29 21:06:17 EDT ---

Fedora 20 changed to end-of-life (EOL) status on 2015-06-23. Fedora 20 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug.

Thank you for reporting this bug, and we are sorry it could not be fixed.
The same thing happens when attempted with cache pools.

[root@host-109 ~]# vgs
  VG           #PV #LV #SN Attr   VSize   VFree
  cache_sanity   4   0   0 wz--n- 384.00m 384.00m

[root@host-109 ~]# lvcreate -L 2G --type cache-pool cache_sanity/oversized
  Volume group "cache_sanity" has insufficient free space (94 extents): 512 required.

[root@host-109 ~]# lvs -a -o +devices
  LV    VG           Attr       LSize Pool Origin Data% Devices
  lvol0 cache_sanity -wi------- 8.00m                   /dev/sda1(0)

3.10.0-327.el7.x86_64
lvm2-2.02.130-5.el7                        BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-libs-2.02.130-5.el7                   BUILT: Wed Oct 14 08:27:29 CDT 2015
lvm2-cluster-2.02.130-5.el7                BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-1.02.107-5.el7               BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-libs-1.02.107-5.el7          BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-1.02.107-5.el7         BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-event-libs-1.02.107-5.el7    BUILT: Wed Oct 14 08:27:29 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7  BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.130-5.el7                     BUILT: Wed Oct 14 08:27:29 CDT 2015
sanlock-3.2.4-1.el7                        BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7                    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.130-5.el7                  BUILT: Wed Oct 14 08:27:29 CDT 2015
Adding a quick note that this is still present in the latest 7.3 rpms. The leftover metadata spare now also breaks vgremove.

[root@host-082 ~]# pvcreate --setphysicalvolumesize 100M /dev/sd[abcdefgh]1
[root@host-082 ~]# vgcreate test /dev/sd[abcdefgh]1
[root@host-082 ~]# pvscan
  PV /dev/sda1   VG test   lvm2 [96.00 MiB / 96.00 MiB free]
  PV /dev/sdb1   VG test   lvm2 [96.00 MiB / 96.00 MiB free]
  PV /dev/sdc1   VG test   lvm2 [96.00 MiB / 96.00 MiB free]
  PV /dev/sdd1   VG test   lvm2 [96.00 MiB / 96.00 MiB free]
  PV /dev/sde1   VG test   lvm2 [96.00 MiB / 96.00 MiB free]
  PV /dev/sdf1   VG test   lvm2 [96.00 MiB / 96.00 MiB free]
  PV /dev/sdg1   VG test   lvm2 [96.00 MiB / 96.00 MiB free]
  PV /dev/sdh1   VG test   lvm2 [96.00 MiB / 96.00 MiB free]

[root@host-082 ~]# lvcreate -L 2G --thinpool test/over_size_pool
  Volume group "test" has insufficient free space (191 extents): 512 required.

[root@host-082 ~]# lvs -a -o +devices
  LV              VG   Attr       LSize Devices
  [lvol0_pmspare] test ewi------- 4.00m /dev/sda1(0)

[root@host-082 ~]# vgremove --yes test
  Assertion failed: can't _pv_write non-orphan PV (in VG )
  Failed to remove physical volume "/dev/sda1" from volume group "test"
  Volume group "test" not properly removed

3.10.0-480.el7.x86_64
lvm2-2.02.161-3.el7                        BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-libs-2.02.161-3.el7                   BUILT: Thu Jul 28 09:31:24 CDT 2016
lvm2-cluster-2.02.161-3.el7                BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-1.02.131-3.el7               BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-libs-1.02.131-3.el7          BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-1.02.131-3.el7         BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-event-libs-1.02.131-3.el7    BUILT: Thu Jul 28 09:31:24 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7  BUILT: Fri Jul 22 05:29:13 CDT 2016
cmirror-2.02.161-3.el7                     BUILT: Thu Jul 28 09:31:24 CDT 2016
sanlock-3.4.0-1.el7                        BUILT: Fri Jun 10 11:41:03 CDT 2016
sanlock-lib-3.4.0-1.el7                    BUILT: Fri Jun 10 11:41:03 CDT 2016
lvm2-lockd-2.02.161-3.el7                  BUILT: Thu Jul 28 09:31:24 CDT 2016
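Until a fix lands, scripted teardown has to notice these auto-named leftovers itself. A hedged sketch of such a check; the helper name and the piped sample text are illustrative only (in practice the input would come from something like `lvs --noheadings -o lv_name <vg>`), not part of the report:

```shell
# Illustrative helper (not from the report): extract auto-generated
# "lvolN" names, including the hidden "[lvol0_pmspare]" metadata spare,
# from lvs output so a cleanup script can deal with them before vgremove.
leftover_lvols() {
  grep -Eo 'lvol[0-9]+(_pmspare)?' | sort -u
}

# Demonstration on sample text mimicking the report (no LVM needed):
printf '  lvol0\n  [lvol0_pmspare]\n' | leftover_lvols
# prints:
#   lvol0
#   lvol0_pmspare
```

This only detects the leftovers; as the vgremove failure above shows, actually cleaning them up was not always possible on the affected builds.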
This behavior is unwanted and odd: it requires explicit removal of an LV that spontaneously appears during a failed command. However, the fix is more involved than can be justified at this stage in the release, so I am moving this to 7.4.
Ran out of time to fix this unlikely bug. It does not meet exception/blocker (ex/bl) criteria; pushing to 7.5.
lvm2 will now clear the created spare LV when creation of the pool LV fails:
https://www.redhat.com/archives/lvm-devel/2017-October/msg00091.html
*** Bug 1353993 has been marked as a duplicate of this bug. ***
Verified with the latest rpms.

attempt_oversized_thinpool_create:
https://beaker.cluster-qe.lab.eng.brq.redhat.com/logs/2017/11/732/73262/237206/633844/TESTOUT.log

attempt_oversized_cachepool_create:
https://beaker.cluster-qe.lab.eng.brq.redhat.com/logs/2017/11/732/73261/237203/633835/TESTOUT.log

3.10.0-768.el7.x86_64
lvm2-2.02.176-2.el7                        BUILT: Fri Nov 3 13:46:53 CET 2017
lvm2-libs-2.02.176-2.el7                   BUILT: Fri Nov 3 13:46:53 CET 2017
lvm2-cluster-2.02.176-2.el7                BUILT: Fri Nov 3 13:46:53 CET 2017
device-mapper-1.02.145-2.el7               BUILT: Fri Nov 3 13:46:53 CET 2017
device-mapper-libs-1.02.145-2.el7          BUILT: Fri Nov 3 13:46:53 CET 2017
device-mapper-event-1.02.145-2.el7         BUILT: Fri Nov 3 13:46:53 CET 2017
device-mapper-event-libs-1.02.145-2.el7    BUILT: Fri Nov 3 13:46:53 CET 2017
device-mapper-persistent-data-0.7.3-2.el7  BUILT: Tue Oct 10 11:00:07 CEST 2017
cmirror-2.02.176-2.el7                     BUILT: Fri Nov 3 13:46:53 CET 2017
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:0853