Description
Corey Marthaler
2015-06-19 17:09:29 UTC
Description of problem:
This is a standard regression check for automatic pool extension, but with the thin pool tdata stacked on a cached volume.
SCENARIO - [verify_auto_extension_of_full_thin_snap]
Create a thin snapshot and then fill it past the auto extend threshold
Enabling thin_pool_autoextend_threshold
# Setting thin_pool_autoextend_threshold to 100 disables automatic
# extensions. The minimum value is 50 (A setting below 50 will be treated
# as 50).
thin_pool_autoextend_threshold = 70
thin_pool_autoextend_percent = 20
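These two settings live in the activation section of /etc/lvm/lvm.conf; a minimal fragment matching the values used in this run:

```
activation {
    # Autoextend the thin pool once data usage crosses 70%...
    thin_pool_autoextend_threshold = 70
    # ...growing it by 20% of its current size each time.
    thin_pool_autoextend_percent = 20
}
```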
Making cache volume to be used as thin pool data volume
Converting *cached* volume to thin pool data device
lvcreate --zero n -L 1G -n POOL snapper_thinp /dev/sda1
WARNING: Logical volume snapper_thinp/POOL not zeroed.
lvcreate -L 1G -n cpool snapper_thinp /dev/sdb1
lvcreate -L 12M -n cpool_meta snapper_thinp /dev/sdb1
Create cache pool volume by combining the cache data and cache metadata (fast) volumes
lvconvert --yes --type cache-pool --poolmetadata snapper_thinp/cpool_meta snapper_thinp/cpool
WARNING: Converting logical volume snapper_thinp/cpool and snapper_thinp/cpool_meta to pool's data and metadata volumes.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Create cached volume by combining the cache pool (fast) and origin (slow) volumes
lvconvert --yes --type cache --cachepool snapper_thinp/cpool snapper_thinp/POOL
Making linear volume to be used as thin pool meta volume
lvcreate --zero n -L 4M -n meta snapper_thinp /dev/sda1
WARNING: Logical volume snapper_thinp/meta not zeroed.
Create thin pool volume by combining the cached thin data and meta volumes
lvconvert --thinpool snapper_thinp/POOL --poolmetadata meta --yes
WARNING: Converting logical volume snapper_thinp/POOL and snapper_thinp/meta to pool's data and metadata volumes.
THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
Making origin volume
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n origin
Making snapshot of origin volume
lvcreate -K -s /dev/snapper_thinp/origin -n auto_extension
[root@host-118 ~]# lvs -a -o +devices
LV Attr LSize Pool Origin Data% Meta% Cpy%Sync Devices
POOL twi-aotz-- 1.00g 0.00 1.07 POOL_tdata(0)
[POOL_tdata] Cwi-aoC--- 1.00g [cpool] [POOL_tdata_corig] 0.03 1.89 0.00 POOL_tdata_corig(0)
[POOL_tdata_corig] owi-aoC--- 1.00g /dev/sda1(0)
[POOL_tmeta] ewi-ao---- 4.00m /dev/sda1(256)
auto_extension Vwi-a-tz-k 1.00g POOL origin 0.00
[cpool] Cwi---C--- 1.00g 0.03 1.89 0.00 cpool_cdata(0)
[cpool_cdata] Cwi-ao---- 1.00g /dev/sdb1(0)
[cpool_cmeta] ewi-ao---- 12.00m /dev/sdb1(256)
[lvol0_pmspare] ewi------- 12.00m /dev/sdb1(259)
origin Vwi-a-tz-- 1.00g POOL 0.00
Filling snapshot /dev/snapper_thinp/auto_extension
dd if=/dev/zero of=/dev/snapper_thinp/auto_extension bs=1M count=723
[root@host-118 ~]# lvs -a -o +devices
LV Attr LSize Pool Origin Data% Meta% Cpy%Sync Devices
POOL twi-aotz-- 1.00g 70.61 10.06 POOL_tdata(0)
[POOL_tdata] Cwi-aoC--- 1.00g [cpool] [POOL_tdata_corig] 0.03 1.89 0.00 POOL_tdata_corig(0)
[POOL_tdata_corig] owi-aoC--- 1.00g /dev/sda1(0)
[POOL_tmeta] ewi-ao---- 4.00m /dev/sda1(256)
auto_extension Vwi-a-tz-k 1.00g POOL origin 70.61
[cpool] Cwi---C--- 1.00g 0.03 1.89 0.00 cpool_cdata(0)
[cpool_cdata] Cwi-ao---- 1.00g /dev/sdb1(0)
[cpool_cmeta] ewi-ao---- 12.00m /dev/sdb1(256)
[lvol0_pmspare] ewi------- 12.00m /dev/sdb1(259)
origin Vwi-a-tz-- 1.00g POOL 0.00
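The dd above writes 723 MiB into the 1 GiB (1024 MiB) pool, which is the 70.61% shown by lvs and crosses the 70% threshold. A sketch of the decision the monitoring daemon would make (illustrative arithmetic only, not lvm2's actual code):

```shell
#!/bin/sh
# Values from this report: 723 MiB written into a 1024 MiB thin pool.
threshold=70   # thin_pool_autoextend_threshold
percent=20     # thin_pool_autoextend_percent
used_mib=723
size_mib=1024

# Integer usage percentage: 723 * 100 / 1024 = 70.
usage=$(( used_mib * 100 / size_mib ))
if [ "$usage" -ge "$threshold" ]; then
    # Grow by 20% of the current size: 1024 + 204 = 1228 MiB.
    new_size=$(( size_mib + size_mib * percent / 100 ))
    echo "extend pool to ${new_size}M"
fi
```

This is the extension request that fails here with "Internal error: _alloc_init called for non-virtual segment with no disk space", even though the PVs have plenty of free space.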
Jun 19 11:45:29 host-118 lvm[2642]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Jun 19 11:45:29 host-118 lvm[2642]: Failed to extend thin snapper_thinp-POOL-tpool.
Jun 19 11:45:39 host-118 lvm[2642]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Jun 19 11:45:39 host-118 lvm[2642]: Failed to extend thin snapper_thinp-POOL-tpool.
Jun 19 11:45:49 host-118 lvm[2642]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Jun 19 11:45:49 host-118 lvm[2642]: Failed to extend thin snapper_thinp-POOL-tpool.
Jun 19 11:45:59 host-118 lvm[2642]: Internal error: _alloc_init called for non-virtual segment with no disk space.
Jun 19 11:45:59 host-118 lvm[2642]: Failed to extend thin snapper_thinp-POOL-tpool.
[...]
[root@host-118 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 snapper_thinp lvm2 a-- 24.99g 23.99g
/dev/sdb1 snapper_thinp lvm2 a-- 24.99g 23.97g
/dev/sdc1 snapper_thinp lvm2 a-- 24.99g 24.99g
/dev/sdd1 snapper_thinp lvm2 a-- 24.99g 24.99g
/dev/sde1 snapper_thinp lvm2 a-- 24.99g 24.99g
/dev/sdf1 snapper_thinp lvm2 a-- 24.99g 24.99g
/dev/sdh1 snapper_thinp lvm2 a-- 24.99g 24.99g
Version-Release number of selected component (if applicable):
2.6.32-563.el6.x86_64
lvm2-2.02.118-3.el6 BUILT: Wed Jun 17 09:40:21 CDT 2015
lvm2-libs-2.02.118-3.el6 BUILT: Wed Jun 17 09:40:21 CDT 2015
lvm2-cluster-2.02.118-3.el6 BUILT: Wed Jun 17 09:40:21 CDT 2015
udev-147-2.62.el6 BUILT: Thu Apr 23 05:44:37 CDT 2015
device-mapper-1.02.95-3.el6 BUILT: Wed Jun 17 09:40:21 CDT 2015
device-mapper-libs-1.02.95-3.el6 BUILT: Wed Jun 17 09:40:21 CDT 2015
device-mapper-event-1.02.95-3.el6 BUILT: Wed Jun 17 09:40:21 CDT 2015
device-mapper-event-libs-1.02.95-3.el6 BUILT: Wed Jun 17 09:40:21 CDT 2015
device-mapper-persistent-data-0.3.2-1.el6 BUILT: Fri Apr 4 08:43:06 CDT 2014
cmirror-2.02.118-3.el6 BUILT: Wed Jun 17 09:40:21 CDT 2015
How reproducible:
Every time
Looks like any extension attempt fails.
[root@host-117 ~]# lvextend -L +1G snapper_thinp/resize
Internal error: _alloc_init called for non-virtual segment with no disk space.
In general, resizing a cached volume is a problem.
For now, the user needs to drop the caching, resize the LV, and cache the LV again.
Better logic that integrates resizing and eventually avoids dropping the cache needs to be developed first.
So this is an RFE, and it cannot be a regression, as this has never worked.
Comment 11
Jonathan Earl Brassow
2017-09-29 14:29:07 UTC
We need to fix the internal errors. We can always open a new bug to handle cache resize, but the errors should be fixed under this bug.
We already have bug 1189111 for adding resize of cache volumes.
The repeated errors in the log should slow down and recur less frequently, but they will still be there (and the user should react to them).
Note: when the user caches a thin-pool data LV, they *are* warned that extension is not supported and that they should --uncache, resize, and re-cache in case the thin pool needs resizing.
At the moment, that is the best we can do.
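The workaround described above, using the LV and PV names from this report (a sketch only; --uncache removes the cache pool, so it has to be re-created afterwards, and which LV to target may depend on the lvm2 version):

```
# Drop the cache layer (this also deletes the cache pool).
lvconvert --uncache snapper_thinp/POOL
# Resize the thin pool now that its data LV is a plain volume.
lvextend -L +1G snapper_thinp/POOL
# Re-create the cache pool and re-attach it, as in the original scenario.
lvcreate -L 1G -n cpool snapper_thinp /dev/sdb1
lvcreate -L 12M -n cpool_meta snapper_thinp /dev/sdb1
lvconvert --yes --type cache-pool --poolmetadata snapper_thinp/cpool_meta snapper_thinp/cpool
lvconvert --yes --type cache --cachepool snapper_thinp/cpool snapper_thinp/POOL
```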
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHEA-2018:0853