+++ This bug was initially created as a clone of Bug #1754743 +++

Description of problem:
======================
While deploying the LV cache, it uses the existing thinpool device to attach. If that thinpool device uses VDO, then the cache disk name should be changed to "/dev/mapper/vdo_<disk>" rather than "/dev/<disk>"; otherwise the deployment fails.

EX:
===
gluster_infra_cache_vars:
  - vgname: gluster_vg_sdb
    cachedisk: '/dev/sdb,/dev/sdc'   --------------> here the name should be /dev/mapper/vdo_sdb
    cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
    cachethinpoolname: gluster_thinpool_gluster_vg_sdb
    cachelvsize: 1G
    cachemode: writethrough

Version-Release number of selected component (if applicable):
============================================================

How reproducible:
=================
Always

Steps to Reproduce:
==================
1. Start Gluster Deployment
2. Use VDO enabled volumes and use the same thinpool device for LV cache

Actual results:
==============
Fails during deployment

Expected results:
=================
The cache disk should use the VDO device name (/dev/mapper/vdo_<disk>), as mentioned above.

Additional info:
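For reference, the corrected variable block for this example would look something like the following. This is only a sketch based on the example above; here only sdb is assumed to be VDO-backed, so only that half of cachedisk changes, and the exact values depend on the deployment:

<snip>
gluster_infra_cache_vars:
  - vgname: gluster_vg_sdb
    cachedisk: '/dev/mapper/vdo_sdb,/dev/sdc'
    cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
    cachethinpoolname: gluster_thinpool_gluster_vg_sdb
    cachelvsize: 1G
    cachemode: writethrough
</snip>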
Do we plan to fix this?
(In reply to Sahina Bose from comment #1)
> Do we plan to fix this?

Yes. Will fix it in the coming sprint.
Tested with cockpit-ovirt-dashboard-0.14.3

1. Selected the option to enable lvmcache, with the thinpool created on top of a VDO volume. But the device used for the thinpool still never had /dev/mapper/vdo_sdx:

<snip>
gluster_infra_volume_groups:
  - vgname: gluster_vg_sdb
    pvname: /dev/sdb
  - vgname: gluster_vg_sdc
    pvname: /dev/mapper/vdo_sdc   <---------- vdo_sdc is created on top of sdc
  - vgname: gluster_vg_sdd
    pvname: /dev/sdd
</snip>

<snip>
gluster_infra_cache_vars:
  - vgname: gluster_vg_sdd
    cachedisk: '/dev/sdc,/dev/sdg'   <------------------ expected = '/dev/mapper/vdo_sdc,/dev/sdg'
    cachelvname: cachelv_gluster_thinpool_gluster_vg_sdd
    cachethinpoolname: gluster_thinpool_gluster_vg_sdd
    cachelvsize: 220G
    cachemode: writethrough
</snip>
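As a side note (not from the original report), a quick way to confirm on the host which disks actually sit under a VDO device is something like the commands below; the device name is taken from the snippet above and will differ per setup:

<snip>
# lsblk /dev/sdc      # vdo_sdc should appear stacked on top of sdc if the disk is VDO-backed
# vdo list            # lists the started VDO volumes (requires the vdo management tools)
</snip>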
Tested with cockpit-ovirt-dashboard-0.14.4 with various scenarios:

1. No dedupe and compression enabled bricks
   - 3 volumes, 3 bricks on the host are on the same disk, no bricks with VDO, creating lvmcache with SSD disk /dev/sdg
   - 4 volumes, 4 bricks on the host are on different disks, creating lvmcache with SSD disk /dev/sdg

2. Dedupe and compression enabled bricks
   - 3 volumes, 3 bricks on the host are on different disks, with VDO enabled on all the non-engine volumes

3. Mix of dedupe and compression disabled and enabled bricks
   - 4 volumes: 1 brick on sdb, the 2nd and 3rd bricks have dedupe and compression enabled and are on the same disk /dev/sdc, 1 brick on sdd without dedupe and compression

Testing was also done with asymmetric bricks, i.e. lvmcache attached to different bricks on different hosts.

Verified with the above tests.
This bugzilla is included in the oVirt 4.4.0 release, published on May 20th 2020. Since the problem described in this bug report should be resolved in the oVirt 4.4.0 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.