Bug 921280 - thin_pool_autoextend_threshold does not work when thin pool is a stacked raid device
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: x86_64 Linux
Priority: high    Severity: high
Target Milestone: rc
Assigned To: Zdenek Kabelac
QA Contact: Cluster QE
Blocks: 960054
Reported: 2013-03-13 16:49 EDT by Corey Marthaler
Modified: 2013-11-21 18:21 EST (History)
CC: 13 users

Fixed In Version: lvm2-2.02.100-3.el6
Doc Type: Enhancement
Doc Text:
Support for more complicated device stacks under a thin pool has been enhanced to properly support resizing of more complex volumes, such as mirrors or RAID. The new lvm2 version now supports extending the thin data volume on RAID; support for mirrors has been deactivated.
Last Closed: 2013-11-21 18:21:59 EST
Type: Bug


Attachments: None
Description Corey Marthaler 2013-03-13 16:49:22 EDT
Description of problem:
The thin_pool_autoextend_threshold feature works fine when using a linear thin pool volume, however, when stacking the thin pool volume on top of a raid device, it does not. 
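For reference, automatic pool extension is controlled by two lvm.conf settings; the values below are illustrative, but they are consistent with the behavior observed in this run (extension triggered past 70% full, target size 1.00g + 20% = 1.20g):

```
# Excerpt from /etc/lvm/lvm.conf -- illustrative values
activation {
    # dmeventd extends the pool once it is more than 70% full...
    thin_pool_autoextend_threshold = 70
    # ...by 20% of its current size (1.00g -> 1.20g).
    thin_pool_autoextend_percent = 20
}
```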

./snapper_thinp -e verify_auto_extension_of_full_snap -t raid1

SCENARIO - [verify_auto_extension_of_full_snap]
Create a thin snapshot and then fill it past the auto extend threshold
Enabling thin_pool_autoextend_threshold
Making origin volume
Converting *Raid* volumes to thin pool and thin pool metadata devices
lvcreate --type raid1 -m 1 -L 1G -n POOL snapper_thinp
lvcreate --type raid1 -m 1 -L 1G -n meta snapper_thinp
Waiting until all mirror|raid volumes become fully syncd...
   0/2 mirror(s) are fully synced: ( 29.49% 21.01% )
   0/2 mirror(s) are fully synced: ( 50.51% 46.45% )
   0/2 mirror(s) are fully synced: ( 74.05% 68.01% )
   1/2 mirror(s) are fully synced: ( 100.00% 93.04% )
   2/2 mirror(s) are fully synced: ( 100.00% 100.00% )
lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n origin
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other1
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other2
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other3
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other4
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other5
Making snapshot of origin volume
lvcreate -s /dev/snapper_thinp/origin -n auto_extension
Filling snapshot /dev/snapper_thinp/auto_extension
720+0 records in
720+0 records out
754974720 bytes (755 MB) copied, 24.2149 s, 31.2 MB/s
thin pool doesn't appear to have been extended to 1.20g


[root@taft-02 ~]# pvscan
  PV /dev/sdd1   VG snapper_thinp   lvm2 [135.66 GiB / 133.66 GiB free]
  PV /dev/sdh1   VG snapper_thinp   lvm2 [135.66 GiB / 133.66 GiB free]
  PV /dev/sdf1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]
  PV /dev/sdc1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]
  PV /dev/sde1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]
  PV /dev/sdg1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]

 LV                    Attr      LSize  Pool Origin Data%  Cpy%Sync Devices
 POOL                  twi-a-tz-  1.00g              70.31          POOL_tdata(0)
 [POOL_tdata]          rwi-aot--  1.00g                      100.00 POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0)
 [POOL_tdata_rimage_0] iwi-aor--  1.00g                             /dev/sdd1(1)
 [POOL_tdata_rimage_1] iwi-aor--  1.00g                             /dev/sdh1(1)
 [POOL_tdata_rmeta_0]  ewi-aor--  4.00m                             /dev/sdd1(0)
 [POOL_tdata_rmeta_1]  ewi-aor--  4.00m                             /dev/sdh1(0)
 [POOL_tmeta]          rwi-aot--  1.00g                      100.00 POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)
 [POOL_tmeta_rimage_0] iwi-aor--  1.00g                             /dev/sdd1(258)
 [POOL_tmeta_rimage_1] iwi-aor--  1.00g                             /dev/sdh1(258)
 [POOL_tmeta_rmeta_0]  ewi-aor--  4.00m                             /dev/sdd1(257)
 [POOL_tmeta_rmeta_1]  ewi-aor--  4.00m                             /dev/sdh1(257)
 auto_extension        Vwi-a-tz-  1.00g POOL origin  70.31
 origin                Vwi-a-tz-  1.00g POOL          0.00
 other1                Vwi-a-tz-  1.00g POOL          0.00
 other2                Vwi-a-tz-  1.00g POOL          0.00
 other3                Vwi-a-tz-  1.00g POOL          0.00
 other4                Vwi-a-tz-  1.00g POOL          0.00
 other5                Vwi-a-tz-  1.00g POOL          0.00


Mar 13 14:41:11 taft-02 lvm[1254]: Extending logical volume POOL to 1.20 GiB
Mar 13 14:41:11 taft-02 lvm[1254]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Mar 13 14:41:11 taft-02 lvm[1254]: Failed to extend thin snapper_thinp-POOL-tpool.
Mar 13 14:41:15 taft-02 lvm[1254]: Extending logical volume POOL to 1.20 GiB
Mar 13 14:41:15 taft-02 lvm[1254]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Mar 13 14:41:15 taft-02 lvm[1254]: Failed to extend thin snapper_thinp-POOL-tpool.
Mar 13 14:41:25 taft-02 lvm[1254]: Extending logical volume POOL to 1.20 GiB
Mar 13 14:41:25 taft-02 lvm[1254]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Mar 13 14:41:25 taft-02 lvm[1254]: Failed to extend thin snapper_thinp-POOL-tpool.
[...]


Version-Release number of selected component (if applicable):
2.6.32-354.el6.x86_64
lvm2-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-libs-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-cluster-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
udev-147-2.43.el6    BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-libs-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-libs-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
cmirror-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013


How reproducible:
Every time
Comment 1 Zdenek Kabelac 2013-03-14 05:16:02 EDT
Device stacking needs to be addressed in multiple areas of lvm2 code base.
Comment 8 Jonathan Earl Brassow 2013-08-28 12:19:53 EDT
The unit test for this is simply to create a pool device on RAID and try to extend it.

1) create RAID LV
2) convert it to thin pool
3) attempt to extend -- FAIL.
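A minimal shell sketch of these three steps (the VG name is a parameter and the sizes are illustrative; it is wrapped in a function since it has to be run as root on a scratch machine with free space in the VG):

```shell
# Hypothetical reproducer sketch for this bug. Pass the name of an
# existing volume group with enough free space, e.g.:
#   reproduce_bz921280 snapper_thinp
reproduce_bz921280() {
    vg=$1

    # 1) create a RAID1 LV to serve as the thin pool data device
    lvcreate --type raid1 -m 1 -L 1G -n pool "$vg"

    # 2) convert it to a thin pool
    lvconvert --yes --thinpool "$vg/pool"

    # 3) attempt to extend -- on lvm2 < 2.02.100 this fails with
    #    "Internal error: _alloc_init called for non-virtual
    #     segment with no disk space."
    lvextend -L +200M "$vg/pool"
}
```

On a build containing the fix (lvm2-2.02.100-3.el6 or later), step 3 should succeed.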
Comment 10 Peter Rajnoha 2013-09-12 05:25:19 EDT
Upstream commits:
4c001a7 thin: fix resize of stacked thin pool volume
6552966 thin: fix monitoring of thin pool volume
0670bfe thin: validation catch multiseg thin pool/volumes
Comment 12 Nenad Peric 2013-10-14 08:32:53 EDT
Tested and Marking VERIFIED with:

lvm2-2.02.100-5.el6.x86_64


Tested by successfully running the test suite from comment 1 and the reproducer from comment 8.
Comment 13 errata-xmlrpc 2013-11-21 18:21:59 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html
