Bug 1233909
| Summary: | pool extension doesn't work when tdata is stacked on top of cache volume | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Thin Provisioning | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | medium | | |
| Priority: | unspecified | CC: | agk, heinzm, jbrassow, mcsontos, msnitzer, mthacker, prajnoha, prockai, rbednar, thornber, zkabelac |
| Version: | 7.3 | Keywords: | FutureFeature |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | lvm2-2.02.175-3.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1251280 (view as bug list) | Environment: | |
| Last Closed: | 2018-04-10 15:16:02 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1251280, 1371597, 1469559 | | |
Looks like any extension attempt fails.

```
[root@host-117 ~]# lvextend -L +1G snapper_thinp/resize
  Internal error: _alloc_init called for non-virtual segment with no disk space.
```

In general, resizing a cached volume is a problem. For now, the user needs to drop the caching, resize the LV, and then cache the LV again. Better logic, which integrates resize and eventually avoids dropping the cache, needs to be developed first. So this is an RFE, and it cannot be a regression, as this has never worked.

We need to fix the internal errors. We can always open a new bug that handles cache resize, but the errors should be fixed within this bug.

Fixed in upstream commit (2.02.176): https://www.redhat.com/archives/lvm-devel/2017-October/msg00054.html

We already have bug 1189111 for adding resize of cache volumes. The repeated errors in the log should slow down and repeat less frequently, but they will still be there (and the user should react to them). Note: when a user caches a thin-pool data LV, he *IS* warned that there is no support for extension and that he should --uncache and resize in case the thin pool needs a resize. At the moment it's our best option...

Thank you for the explanation. Closing bug 1251280 as a duplicate of the mentioned bug 1189111.

Marking this one verified with the latest rpms.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2018:0853
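The workaround described in the comments (drop the cache, resize, then re-cache) can be sketched as a shell sequence. This is a minimal illustration using the reporter's snapper_thinp volume names; the sizes and the exact sub-LV name accepted by lvconvert are assumptions and may differ by lvm2 version:

```shell
# Sketch of the documented workaround for resizing a thin pool whose
# tdata LV is cached. Volume names follow this report; sizes are examples.

# 1. Detach the cache from the cached data LV. --uncache flushes and then
#    deletes the cache pool; "lvconvert --splitcache" would instead keep
#    the cache pool LV around for later re-attachment.
lvconvert --uncache snapper_thinp/POOL_tdata

# 2. With tdata now a plain linear LV, the thin pool can be extended.
lvextend -L +1G snapper_thinp/POOL

# 3. Recreate a cache pool on the fast device and re-attach it.
lvcreate --type cache-pool -L 1G -n cpool snapper_thinp /dev/sdb1
lvconvert --yes --type cache --cachepool snapper_thinp/cpool \
    snapper_thinp/POOL_tdata
```

These commands are destructive administrative operations and require the snapper_thinp VG layout from this report; they are shown for orientation, not as a tested procedure.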
Description of problem:

This is a standard regression check for automatic pool extension, however with the thin pool tdata stacked on a cached volume.

SCENARIO - [verify_auto_extension_of_full_thin_snap]
Create a thin snapshot and then fill it past the auto extend threshold

Enabling thin_pool_autoextend_threshold:

```
# Setting thin_pool_autoextend_threshold to 100 disables automatic
# extensions. The minimum value is 50 (A setting below 50 will be treated
# as 50).
thin_pool_autoextend_threshold = 70
thin_pool_autoextend_percent = 20
```

Making cache volume to be used as thin pool data volume
Converting *cached* volume to thin pool data device

```
lvcreate --zero n -L 1G -n POOL snapper_thinp /dev/sda1
  WARNING: Logical volume snapper_thinp/POOL not zeroed.
lvcreate -L 1G -n cpool snapper_thinp /dev/sdb1
lvcreate -L 12M -n cpool_meta snapper_thinp /dev/sdb1
```

Create cache pool volume by combining the cache data and cache metadata (fast) volumes:

```
lvconvert --yes --type cache-pool --poolmetadata snapper_thinp/cpool_meta snapper_thinp/cpool
  WARNING: Converting logical volume snapper_thinp/cpool and snapper_thinp/cpool_meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
```

Create cached volume by combining the cache pool (fast) and origin (slow) volumes:

```
lvconvert --yes --type cache --cachepool snapper_thinp/cpool snapper_thinp/POOL
```

Making linear volume to be used as thin pool meta volume:

```
lvcreate --zero n -L 4M -n meta snapper_thinp /dev/sda1
  WARNING: Logical volume snapper_thinp/meta not zeroed.
```

Create thin pool volume by combining the cached thin data and meta volumes:

```
lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta --yes
  WARNING: Converting logical volume snapper_thinp/POOL and snapper_thinp/meta to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
```
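With the settings above, dmeventd triggers an extension once pool data usage crosses 70%, and each extension grows the pool by 20% of its current size. The arithmetic for the 1G pool in this scenario can be sketched as follows (a minimal illustration in MiB with integer arithmetic, not lvm code; extent rounding is ignored):

```shell
# Autoextend policy from the lvm.conf snippet above:
# threshold = 70 (%), percent = 20 (%). Pool size in MiB.
pool_size=1024          # the 1G thin pool from the scenario
threshold=70
percent=20

# Usage level at which dmeventd would attempt an extension.
trigger=$(( pool_size * threshold / 100 ))

# Size the pool would grow to after one successful extension.
new_size=$(( pool_size + pool_size * percent / 100 ))

echo "extend when used > ${trigger}MiB"   # 716MiB
echo "new pool size: ${new_size}MiB"      # 1228MiB
```

This matches the transcript below: writing 723MiB into the 1G snapshot pushes data usage to about 70.6%, just past the 70% trigger, which is when the failed extension attempts start appearing in the log.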
Making origin volume:

```
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n origin
```

Making snapshot of origin volume:

```
lvcreate -K -s /dev/snapper_thinp/origin -n auto_extension
```

```
[root@host-118 ~]# lvs -a -o +devices
  LV                 Attr       LSize  Pool    Origin              Data%  Meta%  Cpy%Sync Devices
  POOL               twi-aotz--  1.00g                             0.00   1.07            POOL_tdata(0)
  [POOL_tdata]       Cwi-aoC---  1.00g [cpool] [POOL_tdata_corig]  0.03   1.89   0.00     POOL_tdata_corig(0)
  [POOL_tdata_corig] owi-aoC---  1.00g                                                    /dev/sda1(0)
  [POOL_tmeta]       ewi-ao----  4.00m                                                    /dev/sda1(256)
  auto_extension     Vwi-a-tz-k  1.00g POOL    origin              0.00
  [cpool]            Cwi---C---  1.00g                             0.03   1.89   0.00     cpool_cdata(0)
  [cpool_cdata]      Cwi-ao----  1.00g                                                    /dev/sdb1(0)
  [cpool_cmeta]      ewi-ao---- 12.00m                                                    /dev/sdb1(256)
  [lvol0_pmspare]    ewi------- 12.00m                                                    /dev/sdb1(259)
  origin             Vwi-a-tz--  1.00g POOL                        0.00
```

Filling snapshot /dev/snapper_thinp/auto_extension:

```
dd if=/dev/zero of=/dev/snapper_thinp/auto_extension bs=1M count=723
```

```
[root@host-118 ~]# lvs -a -o +devices
  LV                 Attr       LSize  Pool    Origin              Data%  Meta%  Cpy%Sync Devices
  POOL               twi-aotz--  1.00g                             70.61  10.06           POOL_tdata(0)
  [POOL_tdata]       Cwi-aoC---  1.00g [cpool] [POOL_tdata_corig]  0.03   1.89   0.00     POOL_tdata_corig(0)
  [POOL_tdata_corig] owi-aoC---  1.00g                                                    /dev/sda1(0)
  [POOL_tmeta]       ewi-ao----  4.00m                                                    /dev/sda1(256)
  auto_extension     Vwi-a-tz-k  1.00g POOL    origin              70.61
  [cpool]            Cwi---C---  1.00g                             0.03   1.89   0.00     cpool_cdata(0)
  [cpool_cdata]      Cwi-ao----  1.00g                                                    /dev/sdb1(0)
  [cpool_cmeta]      ewi-ao---- 12.00m                                                    /dev/sdb1(256)
  [lvol0_pmspare]    ewi------- 12.00m                                                    /dev/sdb1(259)
  origin             Vwi-a-tz--  1.00g POOL                        0.00
```

```
Jun 19 11:45:29 host-118 lvm[2642]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Jun 19 11:45:29 host-118 lvm[2642]: Failed to extend thin snapper_thinp-POOL-tpool.
Jun 19 11:45:39 host-118 lvm[2642]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Jun 19 11:45:39 host-118 lvm[2642]: Failed to extend thin snapper_thinp-POOL-tpool.
```
```
Jun 19 11:45:49 host-118 lvm[2642]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Jun 19 11:45:49 host-118 lvm[2642]: Failed to extend thin snapper_thinp-POOL-tpool.
Jun 19 11:45:59 host-118 lvm[2642]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Jun 19 11:45:59 host-118 lvm[2642]: Failed to extend thin snapper_thinp-POOL-tpool.
[...]
```

```
[root@host-118 ~]# pvs
  PV         VG            Fmt  Attr PSize  PFree
  /dev/sda1  snapper_thinp lvm2 a--  24.99g 23.99g
  /dev/sdb1  snapper_thinp lvm2 a--  24.99g 23.97g
  /dev/sdc1  snapper_thinp lvm2 a--  24.99g 24.99g
  /dev/sdd1  snapper_thinp lvm2 a--  24.99g 24.99g
  /dev/sde1  snapper_thinp lvm2 a--  24.99g 24.99g
  /dev/sdf1  snapper_thinp lvm2 a--  24.99g 24.99g
  /dev/sdh1  snapper_thinp lvm2 a--  24.99g 24.99g
```

Version-Release number of selected component (if applicable):

```
2.6.32-563.el6.x86_64
lvm2-2.02.118-3.el6                        BUILT: Wed Jun 17 09:40:21 CDT 2015
lvm2-libs-2.02.118-3.el6                   BUILT: Wed Jun 17 09:40:21 CDT 2015
lvm2-cluster-2.02.118-3.el6                BUILT: Wed Jun 17 09:40:21 CDT 2015
udev-147-2.62.el6                          BUILT: Thu Apr 23 05:44:37 CDT 2015
device-mapper-1.02.95-3.el6                BUILT: Wed Jun 17 09:40:21 CDT 2015
device-mapper-libs-1.02.95-3.el6           BUILT: Wed Jun 17 09:40:21 CDT 2015
device-mapper-event-1.02.95-3.el6          BUILT: Wed Jun 17 09:40:21 CDT 2015
device-mapper-event-libs-1.02.95-3.el6     BUILT: Wed Jun 17 09:40:21 CDT 2015
device-mapper-persistent-data-0.3.2-1.el6  BUILT: Fri Apr  4 08:43:06 CDT 2014
cmirror-2.02.118-3.el6                     BUILT: Wed Jun 17 09:40:21 CDT 2015
```

How reproducible:
Every time