Bug 1087648

Summary: auto extension of thin pools that have passed their threshold with the "default" profile doesn't appear to work
Product: Red Hat Enterprise Linux 7
Component: lvm2
lvm2 sub component: Thin Provisioning
Version: 7.0
Hardware: x86_64
OS: Linux
Severity: high
Priority: unspecified
Status: CLOSED NOTABUG
Target Milestone: rc
Reporter: Corey Marthaler <cmarthal>
Assignee: LVM and device-mapper development team <lvm-team>
QA Contact: Cluster QE <mspqa-list>
CC: agk, cmarthal, heinzm, jbrassow, msnitzer, prajnoha, prockai, thornber, zkabelac
Doc Type: Bug Fix
Type: Bug
Last Closed: 2014-04-15 14:46:13 UTC
Attachments:
  -vvvv of the lvcreate w/ "default" profile
  -vvvv of the lvcreate w/ "thin-performance" profile

Description Corey Marthaler 2014-04-14 23:24:06 UTC
Description of problem:
Looks like autoextend may be broken with the default profile set.


### THIS WORKS:
SCENARIO - [verify_auto_extension_of_full_snap]
Create a thin snapshot and then fill it past the auto extend threshold
Enabling thin_pool_autoextend_threshold

( ** PROFILE: thin-performance **)
lvcreate --thinpool POOL --profile thin-performance --zero n -L 1G snapper_thinp

Sanity checking pool device metadata
(thin_check /dev/mapper/snapper_thinp-POOL_tmeta)
examining superblock
examining devices tree
examining mapping tree
Making snapshot of origin volume
lvcreate -K -s /dev/snapper_thinp/origin -n auto_extension
Filling snapshot /dev/snapper_thinp/auto_extension
723+0 records in
723+0 records out
758120448 bytes (758 MB) copied, 9.13378 s, 83.0 MB/s

Apr 14 17:39:45 host-053 qarshd[15584]: Running cmdline: dd if=/dev/zero of=/dev/snapper_thinp/auto_extension bs=1M count=723
Apr 14 17:39:54 host-053 lvm[2018]: Extending logical volume POOL_tdata to 1.20 GiB
Apr 14 17:39:54 host-053 kernel: device-mapper: thin: Data device (dm-3) discard unsupported: Disabling discard passdown.
Apr 14 17:39:54 host-053 kernel: device-mapper: thin: 253:4: growing the data device from 2048 to 2464 blocks
Apr 14 17:39:54 host-053 lvm[2018]: Monitoring thin snapper_thinp-POOL-tpool.
Apr 14 17:39:54 host-053 lvm[2018]: Logical volume POOL successfully resized

Removing snap volume snapper_thinp/auto_extension
lvremove -f /dev/snapper_thinp/auto_extension
Removing thin origin and other virtual thin volumes
Removing thinpool snapper_thinp/POOL
Disabling snapshot_autoextend_threshold




### THIS DOES NOT WORK:
SCENARIO - [verify_auto_extension_of_full_snap]
Create a thin snapshot and then fill it past the auto extend threshold
Enabling thin_pool_autoextend_threshold

( ** PROFILE: default **)
lvcreate --thinpool POOL --profile default --zero n -L 1G snapper_thinp

Sanity checking pool device metadata
(thin_check /dev/mapper/snapper_thinp-POOL_tmeta)
examining superblock
examining devices tree
examining mapping tree
lvcreate --virtualsize 1G -T snapper_thinp/POOL -n origin
Making snapshot of origin volume
lvcreate -K -s /dev/snapper_thinp/origin -n auto_extension
Filling snapshot /dev/snapper_thinp/auto_extension
723+0 records in
723+0 records out
758120448 bytes (758 MB) copied, 9.96419 s, 76.1 MB/s

thin pool doesn't appear to have been extended to 1.2*g
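
For reference, a quick way to check whether the pool actually grew (a sketch; the lvs field names are standard, the expected sizes are simply what this test is looking for):

  lvs -o lv_name,lv_size,data_percent snapper_thinp/POOL
  # after a successful autoextend the pool should report roughly 1.20g;
  # in the failing case it stays at the original 1.00g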




Version-Release number of selected component (if applicable):
3.10.0-116.el7.x86_64
lvm2-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-libs-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
lvm2-cluster-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-event-libs-1.02.84-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014
device-mapper-persistent-data-0.3.0-1.el7    BUILT: Fri Mar 28 07:42:24 CDT 2014
cmirror-2.02.105-14.el7    BUILT: Wed Mar 26 08:29:41 CDT 2014


How reproducible:
Every time

Comment 1 Corey Marthaler 2014-04-14 23:42:33 UTC
Created attachment 886317 [details]
-vvvv of the lvcreate w/ "default" profile

Comment 2 Corey Marthaler 2014-04-14 23:43:16 UTC
Created attachment 886318 [details]
-vvvv of the lvcreate w/ "thin-performance" profile

Comment 4 Peter Rajnoha 2014-04-15 08:29:49 UTC
The default.profile defines 
   activation/thin_pool_autoextend_threshold = 100

If you use thin-performance.profile, this value is not defined in the profile, so the "master" value defined in lvm.conf is used instead. What's the value of activation/thin_pool_autoextend_threshold in your lvm.conf file?
If it's anything other than 100, that would explain the behaviour reported here.
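
For example, the value coming from lvm.conf alone can be checked with (a sketch; if this dumpconfig build doesn't accept a specific setting path, grepping the file works just as well):

  lvm dumpconfig activation/thin_pool_autoextend_threshold
  # or:
  grep thin_pool_autoextend_threshold /etc/lvm/lvm.conf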

Comment 5 Peter Rajnoha 2014-04-15 08:31:49 UTC
Hint: if you want to see the exact configuration used by LVM (merged lvm.conf and profile config), you can use:
  lvm dumpconfig --profile <some_profile> --mergedconfig
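
For example, to compare just the autoextend settings the two profiles end up with (a sketch, trimming the merged output with grep):

  lvm dumpconfig --profile default --mergedconfig | grep thin_pool_autoextend
  lvm dumpconfig --profile thin-performance --mergedconfig | grep thin_pool_autoextend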

Comment 6 Corey Marthaler 2014-04-15 14:28:33 UTC
The test case does properly turn on the autoextend threshold; this is what's in the lvm.conf:

    thin_pool_autoextend_threshold = 70
    thin_pool_autoextend_percent = 20

But you're saying that by passing the "default" profile on the lvcreate command line I'm overriding what's in lvm.conf and using the default value of 100, which effectively turns autoextend off again?

Comment 7 Corey Marthaler 2014-04-15 14:46:13 UTC
So it looks like I answered my own question: it does turn it back off. So not a bug, I guess. Odd, though.

lvm dumpconfig --profile default --mergedconfig

activation {
        checks=0
        udev_sync=1
        udev_rules=1
        verify_udev_operations=0
        retry_deactivation=1
        missing_stripe_filler="error"
        use_linear_target=1
        reserved_stack=64
        reserved_memory=8192
        process_priority=-18
        raid_region_size=512
        readahead="auto"
        raid_fault_policy="warn"
        mirror_log_fault_policy="allocate"
        mirror_image_fault_policy="remove"
        snapshot_autoextend_threshold=100
        snapshot_autoextend_percent=20
        thin_pool_autoextend_threshold=100
        thin_pool_autoextend_percent=20
        use_mlockall=0
        monitoring=1
        polling_interval=15
}

Comment 8 Peter Rajnoha 2014-04-16 07:31:53 UTC
(In reply to Corey Marthaler from comment #7)
> So it looks like I answered my own question: it does turn it back off. So not a
> bug, I guess. Odd, though.

It's just an overlay of configs (a "config cascade"):

direct config override on command line (the "--config" cmd option)
  ---> profile config (either "--profile" cmd option or the one attached in metadata)
  ---> tag config
  ---> /etc/lvm/lvm.conf
  ---> default value hardcoded in the LVM binary itself

It is evaluated in that order and the first value found is used.
The default.profile contains all profilable configuration settings with default values assigned.
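
To make the cascade concrete for this report (a sketch; the values come from the comments above, the annotations are just illustration):

  lvm dumpconfig activation/thin_pool_autoextend_threshold
  #   thin_pool_autoextend_threshold=70   <- the lvm.conf value the test relies on
  lvm dumpconfig --profile default --mergedconfig | grep thin_pool_autoextend_threshold
  #   thin_pool_autoextend_threshold=100  <- default.profile shadows lvm.conf, so the
  #                                          70% threshold is never in effect and the
  #                                          pool is never autoextended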