Bug 1312129 - need a better error when attempting to change cache setting that isn't currently set
Status: NEW
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: lvm2
Version: 6.8
Hardware: x86_64 Linux
Priority: unspecified
Severity: low
Target Milestone: rc
Target Release: ---
Assigned To: Zdenek Kabelac
QA Contact: cluster-qe@redhat.com
Docs Contact:
Depends On:
Blocks:
Reported: 2016-02-25 15:09 EST by Corey Marthaler
Modified: 2017-09-14 07:41 EDT (History)
CC List: 6 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Corey Marthaler 2016-02-25 15:09:36 EST
Description of problem:
[root@host-118 ~]#  lvcreate -L 4G -n cacheA cache_sanity /dev/sda1
  Logical volume "cacheA" created.
[root@host-118 ~]# lvcreate -L 4G -n poolA cache_sanity /dev/sdf1
  Logical volume "poolA" created.
[root@host-118 ~]# lvcreate -L 12M -n pool_metaA cache_sanity /dev/sdf1
  Logical volume "pool_metaA" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachepolicy mq --cachemode writethrough -c 32 --poolmetadata cache_sanity/pool_metaA cache_sanity/poolA
  WARNING: Converting logical volume cache_sanity/poolA and cache_sanity/pool_metaA to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/poolA to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool cache_sanity/poolA cache_sanity/cacheA
  Logical volume cache_sanity/cacheA is now cached.


[root@host-118 ~]# lvcreate -L 4G -n cacheB cache_sanity /dev/sda1
  Logical volume "cacheB" created.
[root@host-118 ~]# lvcreate -L 2G -n poolB cache_sanity /dev/sde1
  Logical volume "poolB" created.
[root@host-118 ~]# lvcreate -L 8M -n pool_metaB cache_sanity /dev/sde1
  Logical volume "pool_metaB" created.
[root@host-118 ~]# lvconvert --yes --type cache-pool --cachepolicy cleaner --cachemode writeback --poolmetadata cache_sanity/pool_metaB cache_sanity/poolB
  WARNING: Converting logical volume cache_sanity/poolB and cache_sanity/pool_metaB to pool's data and metadata volumes.
  THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.)
  Converted cache_sanity/poolB to cache pool.
[root@host-118 ~]# lvconvert --yes --type cache --cachepool cache_sanity/poolB cache_sanity/cacheB
  Logical volume cache_sanity/cacheB is now cached.


[root@host-118 ~]# lvs -a -o +devices
  LV              VG           Attr       LSize   Pool     Data%  Meta%   Cpy%Sync  Devices
  cacheA          cache_sanity Cwi-a-C---   4.00g [poolA]   0.00   8.66     100.00  cacheA_corig(0)
  [cacheA_corig]  cache_sanity owi-aoC---   4.00g                                   /dev/sda1(0)
  cacheB          cache_sanity Cwi-a-C---   4.00g [poolB]   0.00   3.47     100.00  cacheB_corig(0)
  [cacheB_corig]  cache_sanity owi-aoC---   4.00g                                   /dev/sda1(1027)
  [lvol0_pmspare] cache_sanity ewi-------  12.00m                                   /dev/sda1(1024)
  [poolA]         cache_sanity Cwi---C---   4.00g           0.00   8.66     100.00  poolA_cdata(0)
  [poolA_cdata]   cache_sanity Cwi-ao----   4.00g                                   /dev/sdf1(0)
  [poolA_cmeta]   cache_sanity ewi-ao----  12.00m                                   /dev/sdf1(1024)
  [poolB]         cache_sanity Cwi---C---   2.00g           0.00   3.47     100.00  poolB_cdata(0)
  [poolB_cdata]   cache_sanity Cwi-ao----   2.00g                                   /dev/sde1(0)
  [poolB_cmeta]   cache_sanity ewi-ao----   8.00m                                   /dev/sde1(512)



[root@host-118 ~]# lvs -o name,cache_policy,kernel_cache_settings
  LV      Cache Policy KCache Settings                                                                                                                                       
  cacheA  mq           migration_threshold=2048,random_threshold=4,sequential_threshold=512,discard_promote_adjustment=1,read_promote_adjustment=4,write_promote_adjustment=8
  cacheB  cleaner      migration_threshold=2048                                                                                                                              
[root@host-118 ~]# lvs -o name,cache_policy,cache_settings
  LV      Cache Policy Cache Settings
  cacheA  mq                         
  cacheB  cleaner                    
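
Note the difference between the two columns above: kernel_cache_settings reports what the loaded kernel target is actually using (including the policy's default values), while cache_settings lists only values explicitly stored in the LVM metadata, which is why it is empty here. The two can be compared side by side, e.g.:

[root@host-118 ~]# lvs -o name,cache_policy,cache_settings,kernel_cache_settings cache_sanity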

[root@host-118 ~]# lvchange --cachesettings discard_promote_adjustment=0 cache_sanity/cacheA cache_sanity/cacheB
  Logical volume "cacheA" changed.
  device-mapper: reload ioctl on (253:6) failed: Invalid argument
  Failed to lock logical volume cache_sanity/cacheB.
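
What presumably happens here: cacheB runs the cleaner policy, whose only kernel-visible tunable is migration_threshold (see the kernel_cache_settings output above), so the table reload that tries to pass discard_promote_adjustment=0 is rejected by the kernel, and all the user sees is the generic ioctl failure. One (untested) way to inspect what the loaded target was given is to look at the active table line; the policy name and its key/value pairs appear at the end of the cache target line:

[root@host-118 ~]# dmsetup table cache_sanity-cacheB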

[root@host-118 ~]# lvchange --cachesettings sequential_threshold=1024 cache_sanity/cacheA
  Logical volume "cacheA" changed.
[root@host-118 ~]# lvchange --cachesettings sequential_threshold=1024 cache_sanity/cacheB
  device-mapper: reload ioctl on (253:6) failed: Invalid argument
  Failed to lock logical volume cache_sanity/cacheB.
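
A possible (untested) workaround for cacheB would be to switch it to a policy that actually supports the key in the same command, instead of tuning cleaner directly, e.g.:

[root@host-118 ~]# lvchange --cachepolicy mq --cachesettings sequential_threshold=1024 cache_sanity/cacheB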

Version-Release number of selected component (if applicable):
2.6.32-616.el6.x86_64

lvm2-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
lvm2-libs-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
lvm2-cluster-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
udev-147-2.71.el6    BUILT: Wed Feb 10 07:07:17 CST 2016
device-mapper-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-libs-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-event-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-event-libs-1.02.117-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
device-mapper-persistent-data-0.6.2-0.1.rc5.el6    BUILT: Wed Feb 24 07:07:09 CST 2016
cmirror-2.02.143-1.el6    BUILT: Wed Feb 24 07:59:50 CST 2016
Comment 2 Zdenek Kabelac 2016-02-26 07:22:58 EST
Hmm

--cachesettings takes an arbitrary list of parameters.

There is no validation on the lvm2 side - the reason is that we support 'non-upstream' cache policies, so we cannot know what parameters such modules would take.

On the other hand, we may possibly introduce some more checks for the existing, already upstreamed policies.

So I'm not yet sure what we will do with this BZ.
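
Until such checks exist, a defensive (untested) pattern on the user side is to read the active policy and its kernel-visible tunables first, and only change keys that already appear there - e.g. for cacheB (cleaner), migration_threshold is the only key listed, so only it is safe to change (4096 below is just an illustrative value):

[root@host-118 ~]# lvs --noheadings -o cache_policy,kernel_cache_settings cache_sanity/cacheB
[root@host-118 ~]# lvchange --cachesettings migration_threshold=4096 cache_sanity/cacheB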
