| Summary: | "Failed to lock logical volume" error when attempting to change to an invalid cachepolicy | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Corey Marthaler <cmarthal> |
| Component: | lvm2 | Assignee: | Zdenek Kabelac <zkabelac> |
| lvm2 sub component: | Cache Logical Volumes | QA Contact: | cluster-qe <cluster-qe> |
| Status: | CLOSED WONTFIX | Docs Contact: | |
| Severity: | low | | |
| Priority: | unspecified | CC: | agk, cmarthal, heinzm, jbrassow, msnitzer, prajnoha, zkabelac |
| Version: | 7.3 | | |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-01-22 20:22:52 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Attachments: | verbose lvchange attempt (attachment 1208278) | | |
Description (Corey Marthaler, 2016-10-07 23:11:47 UTC)

Created attachment 1208278: verbose lvchange attempt
Did you want to use the 'mq' policy instead of 'mp'? lvm2 has no control over which policies are allowed. We intentionally accept any 'random' string, since a user may eventually build their own kernel module implementing a different kind of cache policy. (The same applies to cache_settings: again a free string without validation.) So lvm2 has no way to know whether a policy exists unless it tries to use it; at the moment we let the request go all the way to the ioctl. We could possibly implement an up-front test for the presence of the module, but since it is valid to alias policy A to some other policy B (e.g. 'mq' is implemented as 'smq' on recent kernels), that is not straightforward either. AFAIK there is no kernel query known to me that lists the available cache policies. So yes, the error message looks a bit ugly and cryptic, but it is reasonably correct, and no damage to lvm2 metadata occurs because the problem is detected before commit. This bug should rather be converted to an RFE for enhanced validation of usable policies before asking the kernel to do something impossible.

Zdenek is correct; this is about the error given when providing an invalid policy.

```
[root@host-113 ~]# lvcreate -L 4G -n corigin cache_sanity
Logical volume "corigin" created.

[root@host-113 ~]# lvcreate --yes --type cache-pool -L 500M cache_sanity/cpool
Using default stripesize 64.00 KiB.
Logical volume "cpool" created.

[root@host-113 ~]# lvconvert --yes --type cache --cachepool cache_sanity/cpool cache_sanity/corigin
Logical volume cache_sanity/corigin is now cached.

[root@host-113 ~]# lvs -a -o +devices,cachemode,cachepolicy
  LV              Attr       LSize   Pool    Origin          Data%  Meta%  Cpy%Sync Devices          CacheMode    CachePolicy
  corigin         Cwi-a-C---   4.00g [cpool] [corigin_corig] 0.00   1.07   0.00     corigin_corig(0) writethrough smq
  [corigin_corig] owi-aoC---   4.00g                                                /dev/sda1(0)
  [cpool]         Cwi---C--- 500.00m                         0.00   1.07   0.00     cpool_cdata(0)   writethrough smq
  [cpool_cdata]   Cwi-ao---- 500.00m                                                /dev/sda1(1028)
  [cpool_cmeta]   ewi-ao----   8.00m                                                /dev/sda1(1026)
  [lvol0_pmspare] ewi-------   8.00m                                                /dev/sda1(1024)

[root@host-113 ~]# dmsetup table
cache_sanity-cpool_cmeta: 0 16384 linear 8:1 8407040
cache_sanity-corigin: 0 8388608 cache 253:4 253:3 253:5 128 1 writethrough smq 0
cache_sanity-corigin_corig: 0 8388608 linear 8:1 2048
cache_sanity-cpool_cdata: 0 1024000 linear 8:1 8423424

# MP fails, it's an invalid option
[root@host-113 ~]# lvchange --cachepolicy mp cache_sanity/corigin
device-mapper: reload ioctl on (253:2) failed: Invalid argument
Failed to lock logical volume cache_sanity/corigin.

# MQ works fine
[root@host-113 ~]# lvchange --cachepolicy mq cache_sanity/corigin
WARNING: Reading VG cache_sanity from disk because lvmetad metadata is invalid.
Logical volume cache_sanity/corigin changed
```

At this moment the code does its best. The command 'tries' to activate, and since activation may happen on a 'remote' node, lvm2 does not yet provide a mechanism to track back the real reason for an activation failure. So the best we can do is report a 'locking' failure (i.e. an activation failure). An enhancement here would need larger code changes, so it is certainly not a 7.4 topic.

Development Management has reviewed and declined this request. You may appeal this decision by reopening this request.
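As an illustration of the up-front module check discussed above (explicitly not something lvm2 does today), here is a minimal sketch of a pre-flight test a user could run before changing the cache policy. The dm-cache-<policy> module naming convention and the `modprobe --dry-run` probe are assumptions on my part; as noted in the comments, policy aliasing ('mq' served by 'smq') and built-in policies mean a missing module is only a hint, not proof that a policy is unusable.

```sh
#!/bin/sh
# Hypothetical pre-flight check (NOT part of lvm2): before running
#   lvchange --cachepolicy <policy> <vg>/<lv>
# probe whether a dm-cache policy module matching the name exists.
#
# Caveats from the discussion above:
#   * a policy may be an alias served by another module (e.g. 'mq'
#     is implemented by 'smq' on recent kernels), and
#   * a policy may be built into the kernel rather than modular,
# so a miss here is only a hint, not proof the policy is unusable.

policy=${1:?usage: $0 <cachepolicy>}

# --dry-run resolves the module without loading it; --quiet suppresses
# the "module not found" message but keeps the non-zero exit status.
if modprobe --dry-run --quiet "dm-cache-${policy}"; then
    echo "dm-cache-${policy} appears to be available"
else
    echo "warning: no dm-cache-${policy} module found;" >&2
    echo "'lvchange --cachepolicy ${policy}' may fail with" >&2
    echo "'Failed to lock logical volume'" >&2
    exit 1
fi
```

For example, running this hypothetical script with the argument `mp` would warn before the failing `lvchange --cachepolicy mp` call shown above, while `smq` should pass on a kernel that ships the dm-cache-smq module.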