Description of problem:
The only way I can reliably reproduce this, it appears, is to attempt the same cache create and change operations twice by hand. I haven't been able to reproduce this yet with a script.

[root@host-126 ~]# vgcreate cache_sanity /dev/sd[abcdefgh]1
  Volume group "cache_sanity" successfully created

[root@host-126 ~]# lvcreate -L 4G -n corigin cache_sanity
  Logical volume "corigin" created.

[root@host-126 ~]# lvcreate --yes --type cache-pool -L 500M cache_sanity/cpool
  Using default stripesize 64.00 KiB.
  Logical volume "cpool" created.

[root@host-126 ~]# lvconvert --yes --type cache --cachepool cache_sanity/cpool cache_sanity/corigin
  Logical volume cache_sanity/corigin is now cached.

[root@host-126 ~]# lvchange --cachepolicy mq cache_sanity/corigin
  Logical volume cache_sanity/corigin changed.

[root@host-126 ~]# lvs -a -o +devices,cachepolicy,cachemode
  LV              VG           Attr       LSize   Pool    Origin          Data%  Meta%  Cpy%Sync Devices          CachePolicy CacheMode
  corigin         cache_sanity Cwi-a-C---   4.00g [cpool] [corigin_corig] 0.00   1.56   0.00     corigin_corig(0) mq          writethrough
  [corigin_corig] cache_sanity owi-aoC---   4.00g                                                /dev/sda1(0)
  [cpool]         cache_sanity Cwi---C--- 500.00m                         0.00   1.56   0.00     cpool_cdata(0)   mq          writethrough
  [cpool_cdata]   cache_sanity Cwi-ao---- 500.00m                                                /dev/sda1(1028)
  [cpool_cmeta]   cache_sanity ewi-ao----   8.00m                                                /dev/sda1(1026)
  [lvol0_pmspare] cache_sanity ewi-------   8.00m                                                /dev/sda1(1024)

[root@host-126 ~]# lvremove -f cache_sanity
  Logical volume "cpool" successfully removed
  Logical volume "corigin" successfully removed

[root@host-126 ~]# dmsetup ls

### Repeat same above steps again...

[root@host-126 ~]# lvcreate -L 4G -n corigin cache_sanity
  Logical volume "corigin" created.

[root@host-126 ~]# lvcreate --yes --type cache-pool -L 500M cache_sanity/cpool
  Using default stripesize 64.00 KiB.
  Logical volume "cpool" created.

[root@host-126 ~]# lvconvert --yes --type cache --cachepool cache_sanity/cpool cache_sanity/corigin
  Logical volume cache_sanity/corigin is now cached.

[root@host-126 ~]# lvs -a -o +devices,cachemode,cachepolicy
  LV              VG           Attr       LSize   Pool    Origin          Data%  Meta%  Cpy%Sync Devices          CacheMode    CachePolicy
  corigin         cache_sanity Cwi-a-C---   4.00g [cpool] [corigin_corig] 0.00   1.07   0.00     corigin_corig(0) writethrough smq
  [corigin_corig] cache_sanity owi-aoC---   4.00g                                                /dev/sda1(0)
  [cpool]         cache_sanity Cwi---C--- 500.00m                         0.00   1.07   0.00     cpool_cdata(0)   writethrough smq
  [cpool_cdata]   cache_sanity Cwi-ao---- 500.00m                                                /dev/sda1(1028)
  [cpool_cmeta]   cache_sanity ewi-ao----   8.00m                                                /dev/sda1(1026)
  [lvol0_pmspare] cache_sanity ewi-------   8.00m                                                /dev/sda1(1024)

[root@host-126 ~]# dmsetup table
cache_sanity-cpool_cmeta: 0 16384 linear 8:1 8407040
cache_sanity-corigin: 0 8388608 cache 253:4 253:3 253:5 128 1 writethrough smq 0
cache_sanity-corigin_corig: 0 8388608 linear 8:1 2048
cache_sanity-cpool_cdata: 0 1024000 linear 8:1 8423424

[root@host-126 ~]# lvchange --cachepolicy mp cache_sanity/corigin
  device-mapper: reload ioctl on (253:2) failed: Invalid argument
  Failed to lock logical volume cache_sanity/corigin.
Oct  7 17:57:09 host-126 kernel: device-mapper: cache-policy: unknown policy type
Oct  7 17:57:09 host-126 kernel: device-mapper: table: 253:2: cache: Error creating cache's policy
Oct  7 17:57:09 host-126 kernel: device-mapper: ioctl: error adding target to table

Version-Release number of selected component (if applicable):
3.10.0-510.el7.x86_64

lvm2-2.02.166-1.el7                          BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-libs-2.02.166-1.el7                     BUILT: Wed Sep 28 02:26:52 CDT 2016
lvm2-cluster-2.02.166-1.el7                  BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-1.02.135-1.el7                 BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-libs-1.02.135-1.el7            BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-1.02.135-1.el7           BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-event-libs-1.02.135-1.el7      BUILT: Wed Sep 28 02:26:52 CDT 2016
device-mapper-persistent-data-0.6.3-1.el7    BUILT: Fri Jul 22 05:29:13 CDT 2016

How reproducible:
Often - yet not reliably
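For reference, a scripted form of the by-hand steps might look like the sketch below. This is only an illustration assembled from the transcript above (same device list, sizes, and policy); as noted, the failure has not yet reproduced when driven by a script.

#!/bin/sh
# Sketch of the manual reproducer; has not reliably triggered the
# failure when scripted. Devices and sizes match the transcript.
set -x
vgcreate cache_sanity /dev/sd[abcdefgh]1

for pass in 1 2; do
    lvcreate -L 4G -n corigin cache_sanity
    lvcreate --yes --type cache-pool -L 500M cache_sanity/cpool
    lvconvert --yes --type cache --cachepool cache_sanity/cpool cache_sanity/corigin
    lvchange --cachepolicy mq cache_sanity/corigin
    lvs -a -o +devices,cachepolicy,cachemode
    lvremove -f cache_sanity
done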
Created attachment 1208278: verbose lvchange attempt
Did you mean to use the 'mq' policy instead of 'mp'?

lvm2 has no control over which policies are allowed. We intentionally accept any 'random' string, since a user may eventually build their own kernel module implementing a different kind of cache policy. (The same applies to cache_settings - again a free string without validation.)

So lvm2 has no way to figure out whether a policy exists unless it tries to use it; at the moment we let the request run all the way to the ioctl. We could possibly implement a check ahead of time for the presence of the module, but since it is valid to alias policy A to some other policy B, that is not straightforward either (e.g. 'mq' is implemented as 'smq' on recent kernels). AFAIK there is no kernel query known to me that lists the available supported policies.

So yes - the error message looks a bit ugly and cryptic, but it is reasonably correct, and no damage to lvm2 metadata happens since the failure is discovered before commit.

This bug should rather be converted to an RFE for enhanced validation of usable policies before asking the kernel to do something impossible.
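A minimal pre-flight sketch along these lines is shown below. It assumes policies ship as dm-cache-<name> kernel modules (true for the in-tree smq and cleaner policies), and it illustrates exactly the caveat described above: an aliased policy such as 'mq', served by dm-cache-smq on recent kernels, would look "missing" even though the kernel accepts it.

#!/bin/sh
# Sketch only: probe for a dm-cache policy module before calling
# lvchange. The dm-cache-<policy> module naming is an assumption,
# and aliases (e.g. 'mq' provided by dm-cache-smq) defeat the check,
# so the result can only be advisory, never authoritative.
policy="$1"
lv="$2"

if ! modinfo "dm-cache-${policy}" >/dev/null 2>&1; then
    echo "warning: no dm-cache-${policy} module found; the kernel may" >&2
    echo "still accept '${policy}' via an alias, trying anyway..." >&2
fi
lvchange --cachepolicy "$policy" "$lv"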
Zdenek is correct, this is about the error given when providing an invalid policy.

[root@host-113 ~]# lvcreate -L 4G -n corigin cache_sanity
  Logical volume "corigin" created.

[root@host-113 ~]# lvcreate --yes --type cache-pool -L 500M cache_sanity/cpool
  Using default stripesize 64.00 KiB.
  Logical volume "cpool" created.

[root@host-113 ~]# lvconvert --yes --type cache --cachepool cache_sanity/cpool cache_sanity/corigin
  Logical volume cache_sanity/corigin is now cached.

[root@host-113 ~]# lvs -a -o +devices,cachemode,cachepolicy
  LV              Attr       LSize   Pool    Origin          Data%  Meta%  Cpy%Sync Devices          CacheMode    CachePolicy
  corigin         Cwi-a-C---   4.00g [cpool] [corigin_corig] 0.00   1.07   0.00     corigin_corig(0) writethrough smq
  [corigin_corig] owi-aoC---   4.00g                                                /dev/sda1(0)
  [cpool]         Cwi---C--- 500.00m                         0.00   1.07   0.00     cpool_cdata(0)   writethrough smq
  [cpool_cdata]   Cwi-ao---- 500.00m                                                /dev/sda1(1028)
  [cpool_cmeta]   ewi-ao----   8.00m                                                /dev/sda1(1026)
  [lvol0_pmspare] ewi-------   8.00m                                                /dev/sda1(1024)

[root@host-113 ~]# dmsetup table
cache_sanity-cpool_cmeta: 0 16384 linear 8:1 8407040
cache_sanity-corigin: 0 8388608 cache 253:4 253:3 253:5 128 1 writethrough smq 0
cache_sanity-corigin_corig: 0 8388608 linear 8:1 2048
cache_sanity-cpool_cdata: 0 1024000 linear 8:1 8423424

# 'mp' fails, it's an invalid option
[root@host-113 ~]# lvchange --cachepolicy mp cache_sanity/corigin
  device-mapper: reload ioctl on (253:2) failed: Invalid argument
  Failed to lock logical volume cache_sanity/corigin.

# 'mq' works fine
[root@host-113 ~]# lvchange --cachepolicy mq cache_sanity/corigin
  WARNING: Reading VG cache_sanity from disk because lvmetad metadata is invalid.
  Logical volume cache_sanity/corigin changed.
At this moment the code does its best. The command 'tries' to activate - and since activation may happen on a 'remote' node, lvm2 does not yet provide a mechanism to back-track the real reason for the activation failure. So the best we can do is report a 'locking' failure (i.e. an activation failure).

Enhancing this would need larger code changes, so it is surely not a 7.4 topic.
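Until such an enhancement exists, the underlying cause can usually be dug out by hand on the node where activation was attempted. A sketch (the grep patterns are based on the kernel messages quoted earlier in this report, e.g. "device-mapper: cache-policy: unknown policy type"):

# After a "Failed to lock logical volume" error, check the kernel log
# for the real device-mapper failure:
journalctl -k --since "-5 minutes" | grep device-mapper
# or, on systems without journalctl:
dmesg | grep -i device-mapper | tail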
Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.