Red Hat Bugzilla – Bug 921280
thin_pool_autoextend_threshold does not work when thin pool is a stacked raid device
Last modified: 2013-11-21 18:21:59 EST
Description of problem:
The thin_pool_autoextend_threshold feature works fine when using a linear thin pool volume; however, it does not work when the thin pool volume is stacked on top of a RAID device.

./snapper_thinp -e verify_auto_extension_of_full_snap -t raid1

SCENARIO - [verify_auto_extension_of_full_snap]
Create a thin snapshot and then fill it past the auto extend threshold
Enabling thin_pool_autoextend_threshold
Making origin volume
Converting *Raid* volumes to thin pool and thin pool metadata devices
  lvcreate --type raid1 -m 1 -L 1G -n POOL snapper_thinp
  lvcreate --type raid1 -m 1 -L 1G -n meta snapper_thinp
Waiting until all mirror|raid volumes become fully syncd...
  0/2 mirror(s) are fully synced: ( 29.49% 21.01% )
  0/2 mirror(s) are fully synced: ( 50.51% 46.45% )
  0/2 mirror(s) are fully synced: ( 74.05% 68.01% )
  1/2 mirror(s) are fully synced: ( 100.00% 93.04% )
  2/2 mirror(s) are fully synced: ( 100.00% 100.00% )
  lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta
  lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n origin
  lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other1
  lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other2
  lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other3
  lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other4
  lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other5
Making snapshot of origin volume
  lvcreate -s /dev/snapper_thinp/origin -n auto_extension
Filling snapshot /dev/snapper_thinp/auto_extension
720+0 records in
720+0 records out
754974720 bytes (755 MB) copied, 24.2149 s, 31.2 MB/s

thin pool doesn't appear to have been extended to 1.20g

[root@taft-02 ~]# pvscan
  PV /dev/sdd1   VG snapper_thinp   lvm2 [135.66 GiB / 133.66 GiB free]
  PV /dev/sdh1   VG snapper_thinp   lvm2 [135.66 GiB / 133.66 GiB free]
  PV /dev/sdf1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]
  PV /dev/sdc1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]
  PV /dev/sde1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]
  PV /dev/sdg1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]

  LV                    Attr      LSize Pool Origin Data%  Cpy%Sync Devices
  POOL                  twi-a-tz- 1.00g             70.31           POOL_tdata(0)
  [POOL_tdata]          rwi-aot-- 1.00g                    100.00   POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0)
  [POOL_tdata_rimage_0] iwi-aor-- 1.00g                             /dev/sdd1(1)
  [POOL_tdata_rimage_1] iwi-aor-- 1.00g                             /dev/sdh1(1)
  [POOL_tdata_rmeta_0]  ewi-aor-- 4.00m                             /dev/sdd1(0)
  [POOL_tdata_rmeta_1]  ewi-aor-- 4.00m                             /dev/sdh1(0)
  [POOL_tmeta]          rwi-aot-- 1.00g                    100.00   POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)
  [POOL_tmeta_rimage_0] iwi-aor-- 1.00g                             /dev/sdd1(258)
  [POOL_tmeta_rimage_1] iwi-aor-- 1.00g                             /dev/sdh1(258)
  [POOL_tmeta_rmeta_0]  ewi-aor-- 4.00m                             /dev/sdd1(257)
  [POOL_tmeta_rmeta_1]  ewi-aor-- 4.00m                             /dev/sdh1(257)
  auto_extension        Vwi-a-tz- 1.00g POOL origin 70.31
  origin                Vwi-a-tz- 1.00g POOL         0.00
  other1                Vwi-a-tz- 1.00g POOL         0.00
  other2                Vwi-a-tz- 1.00g POOL         0.00
  other3                Vwi-a-tz- 1.00g POOL         0.00
  other4                Vwi-a-tz- 1.00g POOL         0.00
  other5                Vwi-a-tz- 1.00g POOL         0.00

Mar 13 14:41:11 taft-02 lvm[1254]: Extending logical volume POOL to 1.20 GiB
Mar 13 14:41:11 taft-02 lvm[1254]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Mar 13 14:41:11 taft-02 lvm[1254]: Failed to extend thin snapper_thinp-POOL-tpool.
Mar 13 14:41:15 taft-02 lvm[1254]: Extending logical volume POOL to 1.20 GiB
Mar 13 14:41:15 taft-02 lvm[1254]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Mar 13 14:41:15 taft-02 lvm[1254]: Failed to extend thin snapper_thinp-POOL-tpool.
Mar 13 14:41:25 taft-02 lvm[1254]: Extending logical volume POOL to 1.20 GiB
Mar 13 14:41:25 taft-02 lvm[1254]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Mar 13 14:41:25 taft-02 lvm[1254]: Failed to extend thin snapper_thinp-POOL-tpool.
[...]

Version-Release number of selected component (if applicable):
2.6.32-354.el6.x86_64

lvm2-2.02.98-9.el6                       BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-libs-2.02.98-9.el6                  BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-cluster-2.02.98-9.el6               BUILT: Wed Jan 23 10:06:55 CST 2013
udev-147-2.43.el6                        BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-9.el6              BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-libs-1.02.77-9.el6         BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-1.02.77-9.el6        BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-libs-1.02.77-9.el6   BUILT: Wed Jan 23 10:06:55 CST 2013
cmirror-2.02.98-9.el6                    BUILT: Wed Jan 23 10:06:55 CST 2013

How reproducible:
Every time
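For context, the autoextension being tested here is driven by dmeventd via lvm.conf. A minimal sketch of the relevant settings, with illustrative values (the exact numbers the snapper_thinp test enables are not shown in this report, though 70/20 would be consistent with a 1.00g pool being grown to 1.20g once it passed 70.31% full):

  # /etc/lvm/lvm.conf
  activation {
      # Once pool data usage crosses this percentage, dmeventd tries
      # to extend the pool; 100 disables automatic extension.
      thin_pool_autoextend_threshold = 70
      # Grow the pool by this percentage of its current size each time.
      thin_pool_autoextend_percent = 20
  }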
Device stacking needs to be addressed in multiple areas of the lvm2 code base.
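One of those areas is event monitoring (see the monitoring fix in the commit list below). A quick way to check whether dmeventd is watching the pool at all, assuming an lvm2 build that supports the seg_monitor report field:

  # prints "monitored" or "not monitored" for the pool segment
  lvs -o name,segtype,seg_monitor snapper_thinp/POOL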
The unit test for this is simply to create a pool device on RAID and try to extend it (a sketch of the steps follows below):
1) create a RAID LV
2) convert it to a thin pool
3) attempt to extend it -- FAIL
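A minimal command-line sketch of those three steps, assuming a volume group named vg with free extents on at least two PVs (the names here are illustrative, not taken from the test suite):

  # 1) create a RAID1 LV to hold the pool data
  lvcreate --type raid1 -m 1 -L 1G -n pool vg
  # 2) convert it to a thin pool (pool metadata is allocated automatically here)
  lvconvert --thinpool vg/pool
  # 3) attempt to extend the stacked pool; before the fix this failed with
  #    "Internal error: _alloc_init called for non-virtual segment with no disk space."
  lvextend -L +256M vg/pool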
Upstream commits:
4c001a7 thin: fix resize of stacked thin pool volume
6552966 thin: fix monitoring of thin pool volume
0670bfe thin: validation catch multiseg thin pool/volumes
Tested and marking VERIFIED with:
lvm2-2.02.100-5.el6.x86_64

Verified by successfully running the test suite from comment 1 and the reproducer from comment 8.
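A quick sanity check after rerunning the fill test on the fixed build, assuming the same VG as in the report: once the threshold is crossed, the pool should report a size above its original 1.00g.

  # size should exceed 1.00g after dmeventd extends the pool
  lvs -o name,size,data_percent snapper_thinp/POOL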
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html