Bug 1194786 - LVM cache: 'lvs -o cache_settings' should show default values
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Rajnoha
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-02-20 18:04 UTC by Jonathan Earl Brassow
Modified: 2016-05-11 01:16 UTC
CC List: 7 users

Fixed In Version: lvm2-2.02.140-1.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-05-11 01:16:00 UTC
Target Upstream Version:
Embargoed:


Attachments: none


Links
System: Red Hat Product Errata
ID: RHBA-2016:0964
Priority: normal
Status: SHIPPED_LIVE
Summary: lvm2 bug fix and enhancement update
Last Updated: 2016-05-10 22:57:40 UTC

Description Jonathan Earl Brassow 2015-02-20 18:04:14 UTC
It would be nice if 'lvs -o cache_settings' would show all of the available settings and what they are set to.  As it is, only the settings which have been defined are printed.  It makes it tough for the user to know what settings are available.

[root@bp-01 ~]# lvs -o name,cache_policy,cache_settings vg
  LV   Cache Policy Cache Settings
  lv   mq           write_promote_adjustment=2

It could be argued that the current way is the correct way and that users should look elsewhere when trying to figure out what tunables are available and what they do.  I'm filing this bug to force us to make a decision.

Unless coded into LVM, the tunables and their defaults would only be available if the LV was active, in which case they could be retrieved from the kernel (i.e. via 'dmsetup status'). So, the defaults might not be displayable unless the LV is active.
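
For reference, this is a sketch (not captured output) of how those kernel values can be read while the LV is active. The device name follows the usual vg-lv mapping name, the counters are made up, and the exact status layout varies by kernel version, but the core args and the policy's key/value tunables come at the end of the line:

[root@bp-01 ~]# dmsetup status vg-lv
0 8388608 cache 8 27/3072 64 5/65536 0 41 0 0 0 0 0 1 writethrough 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 512 discard_promote_adjustment 1 read_promote_adjustment 4 write_promote_adjustment 2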

Comment 1 Petr Rockai 2015-02-25 17:42:54 UTC
This is tricky, because if we show all values, we lose the information about which settings are defaults (and are subject to future change through software updates) and which are overrides; currently we only display the overrides.

A compromise option might be to add a new field to lvs, along the lines of cache_settings_active or such, which would show the same thing as dmsetup status. Alternatively, a cache_settings_default field could show those values from dmsetup status that have not been overridden; in that case cache_settings and cache_settings_default, when combined, would contain each parameter exactly once, with the overrides in one column and the rest in the other. I am leaning towards the _default way of arranging things. Opinions?
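
To make the compromise concrete, a hypothetical cache_settings_default column (a field name only proposed in this comment, not an implemented lvs option) would split the description's example like this:

[root@bp-01 ~]# lvs -o name,cache_settings,cache_settings_default vg
  LV   Cache Settings             Cache Settings Default
  lv   write_promote_adjustment=2 migration_threshold=2048,random_threshold=4,sequential_threshold=512,discard_promote_adjustment=1,read_promote_adjustment=4

Combined, the two columns would list each parameter exactly once: the override on the left, the untouched kernel defaults on the right.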

Comment 2 Peter Rajnoha 2016-01-18 13:50:22 UTC
I think it's better to display the complete set of settings that the kernel currently uses. It's also consistent with the other fields we already use to display the kernel status, i.e. the ones with the "kernel" prefix in their field name.

So "cache_settings" reports the values set in metadata, while the new "kernel_cache_settings" displays the current kernel configuration, with all supported settings and their current values.

Upstream commit:
https://git.fedorahosted.org/cgit/lvm2.git/commit/?id=1ee6af344bd805d4fa847b95b326c2fe1e52d7cd
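
For example, with both fields side by side (output sketched for an mq cache LV with a single override; "KCache Settings" is the column heading lvs prints for kernel_cache_settings):

[root@bp-01 ~]# lvs -o name,cache_settings,kernel_cache_settings vg
  LV   Cache Settings             KCache Settings
  lv   write_promote_adjustment=2 migration_threshold=2048,random_threshold=4,sequential_threshold=512,discard_promote_adjustment=1,read_promote_adjustment=4,write_promote_adjustment=2

The override appears in both columns, but only kernel_cache_settings shows the full effective configuration.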

Comment 8 Corey Marthaler 2016-02-25 20:06:23 UTC
Marking this verified in the latest rpms. 

2.6.32-616.el6.x86_64
lvm2-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
lvm2-libs-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
lvm2-cluster-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
udev-147-2.71.el6    BUILT: Wed Feb 10 07:07:17 CST 2016
device-mapper-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-libs-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-event-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-event-libs-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-persistent-data-0.6.2-0.1.rc5.el6    BUILT: Wed Feb 24 07:07:09 CST 2016
cmirror-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016


kernel_cache_settings shows the current values in the kernel, while cache_settings shows only the values that have been explicitly changed.

I'll need to file a bug about attempting to change a value that's not currently set in the kernel (visible below: cacheB's cleaner policy only reports migration_threshold, so setting the mq-specific tunables on it fails with a reload error).


[root@host-118 ~]#  lvcreate -L 4G -n cacheA cache_sanity /dev/sda1
  Logical volume "cacheA" created.
[root@host-118 ~]# lvcreate -L 4G -n poolA cache_sanity /dev/sdf1
  Logical volume "poolA" created.
[root@host-118 ~]# lvcreate -L 12M -n pool_metaA cache_sanity /dev/sdf1
  Logical volume "pool_metaA" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writethrough -c 32 --poolmetadata cache_sanity/pool_metaA cache_sanity/poolA
  WARNING: Converting logical volume cache_sanity/poolA and cache_sanity/pool_metaA to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/poolA to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool cache_sanity/poolA cache_sanity/cacheA
  Logical volume cache_sanity/cacheA is now cached.


[root@host-118 ~]# lvcreate -L 4G -n cacheB cache_sanity /dev/sda1
  Logical volume "cacheB" created.
[root@host-118 ~]# lvcreate -L 2G -n poolB cache_sanity /dev/sde1
  Logical volume "poolB" created.
[root@host-118 ~]# lvcreate -L 8M -n pool_metaB cache_sanity /dev/sde1
  Logical volume "pool_metaB" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachepolicy cleaner --cachemode writeback --poolmetadata cache_sanity/pool_metaB cache_sanity/poolB
  WARNING: Converting logical volume cache_sanity/poolB and cache_sanity/pool_metaB to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/poolB to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool cache_sanity/poolB cache_sanity/cacheB
  Logical volume cache_sanity/cacheB is now cached.


[root@host-118 ~]# lvs -a -o +devices
  LV              VG           Attr       LSize   Pool    Data%  Meta%  Cpy%Sync  Devices
  cacheA          cache_sanity Cwi-a-C---   4.00g [poolA] 0.00   8.66   100.00    cacheA_corig(0)
  [cacheA_corig]  cache_sanity owi-aoC---   4.00g                                 /dev/sda1(0)
  cacheB          cache_sanity Cwi-a-C---   4.00g [poolB] 0.00   3.47   100.00    cacheB_corig(0)
  [cacheB_corig]  cache_sanity owi-aoC---   4.00g                                 /dev/sda1(1027)
  [lvol0_pmspare] cache_sanity ewi-------  12.00m                                 /dev/sda1(1024)
  [poolA]         cache_sanity Cwi---C---   4.00g          0.00   8.66  100.00    poolA_cdata(0)
  [poolA_cdata]   cache_sanity Cwi-ao----   4.00g                                 /dev/sdf1(0)
  [poolA_cmeta]   cache_sanity ewi-ao----  12.00m                                 /dev/sdf1(1024)
  [poolB]         cache_sanity Cwi---C---   2.00g          0.00   3.47  100.00    poolB_cdata(0)
  [poolB_cdata]   cache_sanity Cwi-ao----   2.00g                                 /dev/sde1(0)
  [poolB_cmeta]   cache_sanity ewi-ao----   8.00m                                 /dev/sde1(512)

[root@host-118 ~]# lvs -o name,cache_policy,kernel_cache_settings
  LV      Cache Policy KCache Settings                                                                                                                                       
  cacheA  mq           migration_threshold=2048,random_threshold=4,sequential_threshold=512,discard_promote_adjustment=1,read_promote_adjustment=4,write_promote_adjustment=8
  cacheB  cleaner      migration_threshold=2048                                                                                                                              
[root@host-118 ~]# lvs -o name,cache_policy,cache_settings
  LV      Cache Policy Cache Settings
  cacheA  mq                         
  cacheB  cleaner                    

[root@host-118 ~]# lvchange --cachesettings discard_promote_adjustment=0 cache_sanity/cacheA cache_sanity/cacheB
  Logical volume "cacheA" changed.
  device-mapper: reload ioctl on (253:6) failed: Invalid argument
  Failed to lock logical volume cache_sanity/cacheB.

[root@host-118 ~]# lvchange --cachesettings sequential_threshold=1024 cache_sanity/cacheA
  Logical volume "cacheA" changed.
[root@host-118 ~]# lvchange --cachesettings sequential_threshold=1024 cache_sanity/cacheB
  device-mapper: reload ioctl on (253:6) failed: Invalid argument
  Failed to lock logical volume cache_sanity/cacheB.

[root@host-118 ~]# lvchange --cachesettings migration_threshold=4096 cache_sanity/cacheA
  Logical volume "cacheA" changed.
[root@host-118 ~]# lvchange --cachesettings migration_threshold=4096 cache_sanity/cacheB
  Logical volume "cacheB" changed.

[root@host-118 ~]# lvs -o name,cache_policy,kernel_cache_settings
  LV      Cache Policy KCache Settings                                                                                                                                        
  cacheA  mq           migration_threshold=4096,random_threshold=4,sequential_threshold=1024,discard_promote_adjustment=0,read_promote_adjustment=4,write_promote_adjustment=8
  cacheB  cleaner      migration_threshold=4096                                                                                                                               
[root@host-118 ~]# lvs -o name,cache_policy,cache_settings
  LV      Cache Policy Cache Settings                                                                 
  cacheA  mq           migration_threshold=4096,discard_promote_adjustment=0,sequential_threshold=1024
  cacheB  cleaner      migration_threshold=4096

Comment 10 errata-xmlrpc 2016-05-11 01:16:00 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-0964.html

