Bug 1270997 - auto extension of pool may require multiple reactivations before it happens when threshold is initially turned off
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: lvm2
Version: 7.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-10-12 21:13 UTC by Corey Marthaler
Modified: 2021-09-03 12:49 UTC
CC List: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-12-15 07:37:44 UTC
Target Upstream Version:
Embargoed:



Description Corey Marthaler 2015-10-12 21:13:08 UTC
Description of problem:
I've noticed that around 30-40% of the time, automatic thin pool extension doesn't happen for a pool residing on a shared volume unless I deactivate and then reactivate the volume a second time. In this case, the threshold is initially *off* and is *then* turned on.

============================================================
Iteration 4 of 10 started at Mon Oct 12 15:45:40 CDT 2015
============================================================
SCENARIO - [verify_auto_extension_of_pool_w_thres_initially_off]

Create a thin snapshot and then fill it past the extend threshold, then turn on auto extend monitoring
Making pool volume
lvcreate --activate ey --thinpool POOL  --zero n -L 1G --poolmetadatasize 4M snapper_thinp

Making origin volume
lvcreate --activate ey --virtualsize 1G -T snapper_thinp/POOL -n origin

Making snapshot of origin volume
lvcreate --activate ey -k n -s /dev/snapper_thinp/origin -n auto_extension

Filling snapshot /dev/snapper_thinp/auto_extension
723+0 records in
723+0 records out
758120448 bytes (758 MB) copied, 8.74977 s, 86.6 MB/s
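
(The fill command itself is not captured in the log; a hypothetical invocation consistent with the 723 one-MiB records above would be:

dd if=/dev/zero of=/dev/snapper_thinp/auto_extension bs=1M count=723

i.e. ~758 MB written into the 1G pool, roughly 72%, just past the 70% threshold enabled in the next step.)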

Now enabling thin_pool_autoextend_threshold
    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20
sleep 30
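
For reference, these two settings live in the "activation" section of /etc/lvm/lvm.conf; a minimal sketch of what the test enables here:

activation {
    # Start extending the pool once data usage crosses 70%...
    thin_pool_autoextend_threshold = 70
    # ...growing it by 20% of its current size on each extension.
    thin_pool_autoextend_percent = 20
}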

Deactivating origin/snap volume(s)
lvchange -an snapper_thinp/POOL

Activating origin/snap volume(s)
lvchange -ay snapper_thinp/POOL
sleep 45

thin pool doesn't appear to have been extended to 1.2*g
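
(Whether the extension actually happened can be verified with lvs; after one 20% autoextend of a 1G pool it should report about 1.20g:

lvs -o lv_name,lv_size,data_percent snapper_thinp

where lv_name, lv_size, and data_percent are standard lvs report fields.)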


Version-Release number of selected component (if applicable):
3.10.0-322.el7.x86_64

lvm2-2.02.130-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
lvm2-libs-2.02.130-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
lvm2-cluster-2.02.130-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
device-mapper-1.02.107-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
device-mapper-libs-1.02.107-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
device-mapper-event-1.02.107-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
device-mapper-event-libs-1.02.107-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
device-mapper-persistent-data-0.5.5-1.el7    BUILT: Thu Aug 13 09:58:10 CDT 2015
cmirror-2.02.130-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015
sanlock-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
sanlock-lib-3.2.4-1.el7    BUILT: Fri Jun 19 12:48:49 CDT 2015
lvm2-lockd-2.02.130-2.el7    BUILT: Tue Sep 15 07:15:40 CDT 2015


How reproducible:
Sometimes

Comment 3 Corey Marthaler 2015-10-13 19:00:12 UTC
After more testing, this happens on thin pools in *non*-shared VGs as well.

In these cases, even after the write up to 75%+ (with just a 70% threshold), a monitoring restart (lvchange --monitor y), and a reactivation, the POOL still isn't expanded some of the time.


============================================================
Iteration 2 of 50 started at Tue Oct 13 13:45:09 CDT 2015
============================================================
SCENARIO - [verify_auto_extension_of_pool_w_thres_initially_off]
Create a thin snapshot and then fill it past the extend threshold, then turn on auto extend monitoring
Making pool volume
lvcreate  --thinpool POOL  --zero y -L 1G --poolmetadatasize 4M snapper_thinp

Sanity checking pool device (POOL) metadata
examining superblock
examining devices tree
examining mapping tree
checking space map counts

Making origin volume
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n origin
lvcreate  -V 1G -T snapper_thinp/POOL -n other1
  WARNING: Sum of all thin volume sizes (2.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  -V 1G -T snapper_thinp/POOL -n other2
  WARNING: Sum of all thin volume sizes (3.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  -V 1G -T snapper_thinp/POOL -n other3
  WARNING: Sum of all thin volume sizes (4.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  --virtualsize 1G -T snapper_thinp/POOL -n other4
  WARNING: Sum of all thin volume sizes (5.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
lvcreate  -V 1G -T snapper_thinp/POOL -n other5
  WARNING: Sum of all thin volume sizes (6.00 GiB) exceeds the size of thin pool snapper_thinp/POOL (1.00 GiB)!
Making snapshot of origin volume
lvcreate  -k n -s /dev/snapper_thinp/origin -n auto_extension
Filling snapshot /dev/snapper_thinp/auto_extension
775+0 records in
775+0 records out
812646400 bytes (813 MB) copied, 7.58207 s, 107 MB/s
Now enabling thin_pool_autoextend_threshold
Starting monitoring again: lvchange --monitor y snapper_thinp/POOL
Deactivating origin/snap volume(s)
lvchange -an snapper_thinp/POOL
Activating origin/snap volume(s)


thin pool doesn't appear to have been extended to 1.2*g

Comment 4 Corey Marthaler 2015-10-21 16:12:08 UTC
FWIW, a full "vgchange -an; vgchange -ay" is a workaround for this issue that appears to trigger the extension every time.
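
A minimal sketch of that workaround for the VG used in this test (assuming no other LVs in the VG need to stay active, and that reactivation re-registers the pool with dmeventd, which should then notice the crossed threshold):

# Deactivate all LVs in the VG, then reactivate them.
vgchange -an snapper_thinp
vgchange -ay snapper_thinp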

Comment 7 RHEL Program Management 2020-12-15 07:37:44 UTC
After evaluating this issue, we have determined that there are no plans to address it further or fix it in an upcoming release. Therefore, it is being closed. If plans change such that this issue will be fixed in an upcoming release, the bug can be reopened.

