Bug 921280 - thin_pool_autoextend_threshold does not work when thin pool is a stacked raid device
Summary: thin_pool_autoextend_threshold does not work when thin pool is a stacked raid device
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.4
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 960054
 
Reported: 2013-03-13 20:49 UTC by Corey Marthaler
Modified: 2013-11-21 23:21 UTC
CC List: 13 users

Fixed In Version: lvm2-2.02.100-3.el6
Doc Type: Enhancement
Doc Text:
Support for more complicated device stacks under thin pools has been enhanced so that resizing of more complex volumes, such as mirrors or RAIDs, works properly. The new lvm2 version now supports extension of thin data volumes on RAID. Support for mirrors has been deactivated.
Clone Of:
Environment:
Last Closed: 2013-11-21 23:21:59 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
Red Hat Product Errata RHBA-2013:1704 (normal, SHIPPED_LIVE): lvm2 bug fix and enhancement update -- last updated 2013-11-20 21:52:01 UTC

Description Corey Marthaler 2013-03-13 20:49:22 UTC
Description of problem:
The thin_pool_autoextend_threshold feature works fine when using a linear thin pool volume; however, when the thin pool volume is stacked on top of a raid device, it does not.

./snapper_thinp -e verify_auto_extension_of_full_snap -t raid1

SCENARIO - [verify_auto_extension_of_full_snap]
Create a thin snapshot and then fill it past the auto extend threshold
Enabling thin_pool_autoextend_threshold
Making origin volume
Converting *Raid* volumes to thin pool and thin pool metadata devices
lvcreate --type raid1 -m 1 -L 1G -n POOL snapper_thinp
lvcreate --type raid1 -m 1 -L 1G -n meta snapper_thinp
Waiting until all mirror|raid volumes become fully syncd...
   0/2 mirror(s) are fully synced: ( 29.49% 21.01% )
   0/2 mirror(s) are fully synced: ( 50.51% 46.45% )
   0/2 mirror(s) are fully synced: ( 74.05% 68.01% )
   1/2 mirror(s) are fully synced: ( 100.00% 93.04% )
   2/2 mirror(s) are fully synced: ( 100.00% 100.00% )
lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n origin
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other1
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other2
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other3
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other4
lvcreate --virtualsize 1G --thinpool snapper_thinp/POOL -n other5
Making snapshot of origin volume
lvcreate -s /dev/snapper_thinp/origin -n auto_extension
Filling snapshot /dev/snapper_thinp/auto_extension
720+0 records in
720+0 records out
754974720 bytes (755 MB) copied, 24.2149 s, 31.2 MB/s
thin pool doesn't appear to have been extended to 1.20g


[root@taft-02 ~]# pvscan
  PV /dev/sdd1   VG snapper_thinp   lvm2 [135.66 GiB / 133.66 GiB free]
  PV /dev/sdh1   VG snapper_thinp   lvm2 [135.66 GiB / 133.66 GiB free]
  PV /dev/sdf1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]
  PV /dev/sdc1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]
  PV /dev/sde1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]
  PV /dev/sdg1   VG snapper_thinp   lvm2 [135.66 GiB / 135.66 GiB free]

 LV                    Attr      LSize  Pool Origin Data%  Cpy%Sync Devices
 POOL                  twi-a-tz-  1.00g              70.31          POOL_tdata(0)
 [POOL_tdata]          rwi-aot--  1.00g                      100.00 POOL_tdata_rimage_0(0),POOL_tdata_rimage_1(0)
 [POOL_tdata_rimage_0] iwi-aor--  1.00g                             /dev/sdd1(1)
 [POOL_tdata_rimage_1] iwi-aor--  1.00g                             /dev/sdh1(1)
 [POOL_tdata_rmeta_0]  ewi-aor--  4.00m                             /dev/sdd1(0)
 [POOL_tdata_rmeta_1]  ewi-aor--  4.00m                             /dev/sdh1(0)
 [POOL_tmeta]          rwi-aot--  1.00g                      100.00 POOL_tmeta_rimage_0(0),POOL_tmeta_rimage_1(0)
 [POOL_tmeta_rimage_0] iwi-aor--  1.00g                             /dev/sdd1(258)
 [POOL_tmeta_rimage_1] iwi-aor--  1.00g                             /dev/sdh1(258)
 [POOL_tmeta_rmeta_0]  ewi-aor--  4.00m                             /dev/sdd1(257)
 [POOL_tmeta_rmeta_1]  ewi-aor--  4.00m                             /dev/sdh1(257)
 auto_extension        Vwi-a-tz-  1.00g POOL origin  70.31
 origin                Vwi-a-tz-  1.00g POOL          0.00
 other1                Vwi-a-tz-  1.00g POOL          0.00
 other2                Vwi-a-tz-  1.00g POOL          0.00
 other3                Vwi-a-tz-  1.00g POOL          0.00
 other4                Vwi-a-tz-  1.00g POOL          0.00
 other5                Vwi-a-tz-  1.00g POOL          0.00


Mar 13 14:41:11 taft-02 lvm[1254]: Extending logical volume POOL to 1.20 GiB
Mar 13 14:41:11 taft-02 lvm[1254]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Mar 13 14:41:11 taft-02 lvm[1254]: Failed to extend thin snapper_thinp-POOL-tpool.
Mar 13 14:41:15 taft-02 lvm[1254]: Extending logical volume POOL to 1.20 GiB
Mar 13 14:41:15 taft-02 lvm[1254]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Mar 13 14:41:15 taft-02 lvm[1254]: Failed to extend thin snapper_thinp-POOL-tpool.
Mar 13 14:41:25 taft-02 lvm[1254]: Extending logical volume POOL to 1.20 GiB
Mar 13 14:41:25 taft-02 lvm[1254]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Mar 13 14:41:25 taft-02 lvm[1254]: Failed to extend thin snapper_thinp-POOL-tpool.
[...]


Version-Release number of selected component (if applicable):
2.6.32-354.el6.x86_64
lvm2-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-libs-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
lvm2-cluster-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
udev-147-2.43.el6    BUILT: Thu Oct 11 05:59:38 CDT 2012
device-mapper-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-libs-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
device-mapper-event-libs-1.02.77-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013
cmirror-2.02.98-9.el6    BUILT: Wed Jan 23 10:06:55 CST 2013


How reproducible:
Every time

Comment 1 Zdenek Kabelac 2013-03-14 09:16:02 UTC
Device stacking needs to be addressed in multiple areas of lvm2 code base.

Comment 8 Jonathan Earl Brassow 2013-08-28 16:19:53 UTC
The unit test for this is simply to create a pool device on RAID and try to extend it (a command sketch follows the list below).

1) create RAID LV
2) convert it to thin pool
3) attempt to extend -- FAIL.

Comment 10 Peter Rajnoha 2013-09-12 09:25:19 UTC
Upstream commits:
4c001a7 thin: fix resize of stacked thin pool volume
6552966 thin: fix monitoring of thin pool volume
0670bfe thin: validation catch multiseg thin pool/volumes

Comment 12 Nenad Peric 2013-10-14 12:32:53 UTC
Marking VERIFIED with:

lvm2-2.02.100-5.el6.x86_64

Tested by successfully running the test suite from comment 1 and the reproducer from comment 8.

Comment 13 errata-xmlrpc 2013-11-21 23:21:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1704.html

