It would be nice if 'lvs -o cache_settings' would show all of the available settings and what they are set to. As it is, only the settings which have been explicitly defined are printed, which makes it tough for the user to know what settings are available.

[root@bp-01 ~]# lvs -o name,cache_policy,cache_settings vg
  LV   Cache Policy  Cache Settings
  lv   mq            write_promote_adjustment=2

It could be argued that the current behavior is correct and that users should look elsewhere when trying to figure out what tunables are available and what they do. I'm filing this bug to force us to make a decision. Unless coded into LVM, the tunables and their defaults are only available from the kernel (i.e. dmsetup status), and only while the LV is active. So, the defaults might not be displayable unless the LV is active.
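For reference, a minimal sketch of where the kernel exposes those values while the LV is active (the vg-lv device name below is a placeholder, not from this report):

# The in-use policy tunables appear as key/value pairs at the tail of the
# dm-cache status line, after the policy name, e.g.
# "... mq 10 random_threshold 4 sequential_threshold 512 ...".
dmsetup status vg-lv

Anything LVM prints for the defaults would have to be derived from that status output (or hard-coded), hence the "only if the LV is active" caveat above.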
This is tricky, because if we show all values, we lose the information about which settings are defaults (and are therefore subject to future change through software updates) and which are overrides (currently we only display the overrides). A compromise option might be to add a new field to lvs, along the lines of cache_settings_active, which would show the same thing as dmsetup status; or alternatively cache_settings_default, which would show those values from dmsetup status that have not been overridden. In the latter case, cache_settings and cache_settings_default combined would contain each parameter exactly once: overrides in one column and the rest in the other. I am leaning towards the _default way of arranging things; opinions?
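To make the _default arrangement concrete, a purely hypothetical illustration (cache_settings_default is only a proposed field name here, not something lvs accepts; the values are borrowed from the example in the report):

# lvs -o name,cache_settings,cache_settings_default vg      (hypothetical)
#   LV   Cache Settings               Cache Settings Default
#   lv   write_promote_adjustment=2   migration_threshold=2048,random_threshold=4,...
# Each tunable would appear in exactly one of the two columns.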
I think it's better to display the complete set of settings the kernel currently uses - that's also consistent with the other fields we already use to display kernel status, the ones with the "kernel_" prefix in their field name. So "cache_settings" stays as the values set in metadata, while the new "kernel_cache_settings" displays the current kernel configuration, with all supported settings and their current values.

Upstream commit: https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=1ee6af344bd805d4fa847b95b326c2fe1e52d7cd
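As a quick usage sketch (the vg/lv name is a placeholder), the new field can be pulled out on its own and split into one tunable per line for scripting:

# List every kernel-side cache tunable of one LV, one per line.
lvs --noheadings -o kernel_cache_settings vg/lv | tr -d ' ' | tr ',' '\n'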
Marking this verified in the latest rpms.

2.6.32-616.el6.x86_64
lvm2-2.02.143-1.el6                                 BUILT: Wed Feb 24 07:59:50 CST 2016
lvm2-libs-2.02.143-1.el6                            BUILT: Wed Feb 24 07:59:50 CST 2016
lvm2-cluster-2.02.143-1.el6                         BUILT: Wed Feb 24 07:59:50 CST 2016
udev-147-2.71.el6                                   BUILT: Wed Feb 10 07:07:17 CST 2016
device-mapper-1.02.117-1.el6                        BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-libs-1.02.117-1.el6                   BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-event-1.02.117-1.el6                  BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-event-libs-1.02.117-1.el6             BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-persistent-data-0.6.2-0.1.rc5.el6     BUILT: Wed Feb 24 07:07:09 CST 2016
cmirror-2.02.143-1.el6                              BUILT: Wed Feb 24 07:59:50 CST 2016

kernel_cache_settings shows the current value in the kernel and cache_settings shows any value which has been changed. I'll need to file a bug about attempting to change a value that's not currently set in the kernel.

[root@host-118 ~]# lvcreate -L 4G -n cacheA cache_sanity /dev/sda1
  Logical volume "cacheA" created.
[root@host-118 ~]# lvcreate -L 4G -n poolA cache_sanity /dev/sdf1
  Logical volume "poolA" created.
[root@host-118 ~]# lvcreate -L 12M -n pool_metaA cache_sanity /dev/sdf1
  Logical volume "pool_metaA" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writethrough -c 32 --poolmetadata cache_sanity/pool_metaA cache_sanity/poolA
  WARNING: Converting logical volume cache_sanity/poolA and cache_sanity/pool_metaA to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/poolA to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool cache_sanity/poolA cache_sanity/cacheA
  Logical volume cache_sanity/cacheA is now cached.
[root@host-118 ~]# lvcreate -L 4G -n cacheB cache_sanity /dev/sda1
  Logical volume "cacheB" created.
[root@host-118 ~]# lvcreate -L 2G -n poolB cache_sanity /dev/sde1
  Logical volume "poolB" created.
[root@host-118 ~]# lvcreate -L 8M -n pool_metaB cache_sanity /dev/sde1
  Logical volume "pool_metaB" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachepolicy cleaner --cachemode writeback --poolmetadata cache_sanity/pool_metaB cache_sanity/poolB
  WARNING: Converting logical volume cache_sanity/poolB and cache_sanity/pool_metaB to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/poolB to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool cache_sanity/poolB cache_sanity/cacheB
  Logical volume cache_sanity/cacheB is now cached.
[root@host-118 ~]# lvs -a -o +devices
  LV              VG           Attr       LSize  Pool    Data%  Meta%  Cpy%Sync Devices
  cacheA          cache_sanity Cwi-a-C---  4.00g [poolA] 0.00   8.66   100.00   cacheA_corig(0)
  [cacheA_corig]  cache_sanity owi-aoC---  4.00g                                /dev/sda1(0)
  cacheB          cache_sanity Cwi-a-C---  4.00g [poolB] 0.00   3.47   100.00   cacheB_corig(0)
  [cacheB_corig]  cache_sanity owi-aoC---  4.00g                                /dev/sda1(1027)
  [lvol0_pmspare] cache_sanity ewi------- 12.00m                                /dev/sda1(1024)
  [poolA]         cache_sanity Cwi---C---  4.00g         0.00   8.66   100.00   poolA_cdata(0)
  [poolA_cdata]   cache_sanity Cwi-ao----  4.00g                                /dev/sdf1(0)
  [poolA_cmeta]   cache_sanity ewi-ao---- 12.00m                                /dev/sdf1(1024)
  [poolB]         cache_sanity Cwi---C---  2.00g         0.00   3.47   100.00   poolB_cdata(0)
  [poolB_cdata]   cache_sanity Cwi-ao----  2.00g                                /dev/sde1(0)
  [poolB_cmeta]   cache_sanity ewi-ao----  8.00m                                /dev/sde1(512)

[root@host-118 ~]# lvs -o name,cache_policy,kernel_cache_settings
  LV     Cache Policy KCache Settings
  cacheA mq           migration_threshold=2048,random_threshold=4,sequential_threshold=512,discard_promote_adjustment=1,read_promote_adjustment=4,write_promote_adjustment=8
  cacheB cleaner      migration_threshold=2048

[root@host-118 ~]# lvs -o name,cache_policy,cache_settings
  LV     Cache Policy Cache Settings
  cacheA mq
  cacheB cleaner

[root@host-118 ~]# lvchange --cachesettings discard_promote_adjustment=0 cache_sanity/cacheA cache_sanity/cacheB
  Logical volume "cacheA" changed.
  device-mapper: reload ioctl on (253:6) failed: Invalid argument
  Failed to lock logical volume cache_sanity/cacheB.
[root@host-118 ~]# lvchange --cachesettings sequential_threshold=1024 cache_sanity/cacheA
  Logical volume "cacheA" changed.
[root@host-118 ~]# lvchange --cachesettings sequential_threshold=1024 cache_sanity/cacheB
  device-mapper: reload ioctl on (253:6) failed: Invalid argument
  Failed to lock logical volume cache_sanity/cacheB.
[root@host-118 ~]# lvchange --cachesettings migration_threshold=4096 cache_sanity/cacheA
  Logical volume "cacheA" changed.
[root@host-118 ~]# lvchange --cachesettings migration_threshold=4096 cache_sanity/cacheB
  Logical volume "cacheB" changed.

[root@host-118 ~]# lvs -o name,cache_policy,kernel_cache_settings
  LV     Cache Policy KCache Settings
  cacheA mq           migration_threshold=4096,random_threshold=4,sequential_threshold=1024,discard_promote_adjustment=0,read_promote_adjustment=4,write_promote_adjustment=8
  cacheB cleaner      migration_threshold=4096

[root@host-118 ~]# lvs -o name,cache_policy,cache_settings
  LV     Cache Policy Cache Settings
  cacheA mq           migration_threshold=4096,discard_promote_adjustment=0,sequential_threshold=1024
  cacheB cleaner      migration_threshold=4096
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0964.html