Bug 1754748 - Enabling LV cache along with VDO volumes fails during Deployment
Status: CLOSED CURRENTRELEASE
Alias: None
Product: cockpit-ovirt
Classification: oVirt
Component: gluster-ansible
Version: 0.13.8
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ovirt-4.4.0
Target Release: 0.14.4
Assignee: Parth Dhanjal
QA Contact: SATHEESARAN
Blocks: 1754743
 
Reported: 2019-09-24 03:35 UTC by bipin
Modified: 2020-05-20 20:02 UTC
CC List: 5 users

Fixed In Version: cockpit-ovirt-0.14.4
Clone Of: 1754743
Last Closed: 2020-05-20 20:02:35 UTC
oVirt Team: Gluster
sbonazzo: ovirt-4.4?
sbonazzo: planning_ack?
sbonazzo: devel_ack+
sasundar: testing_ack+




Links
oVirt gerrit 106261 (master, MERGED): Enabling LV Cache along with VDO volumes (last updated 2020-04-15 10:17:41 UTC)
oVirt gerrit 107512 (master, MERGED): drop ovirt-engine-yarn dependency (last updated 2020-04-15 10:17:41 UTC)
oVirt gerrit 108141 (master, MERGED): Enabling multiple VDO along with LV Cache (last updated 2020-04-15 10:17:41 UTC)

Description bipin 2019-09-24 03:35:31 UTC
+++ This bug was initially created as a clone of Bug #1754743 +++

Description of problem:
======================
While deploying the LV cache, the existing thinpool device is used to attach the cache. If that thinpool device is backed by VDO, the cache disk name should be "/dev/mapper/vdo_<disk>" rather than "/dev/<disk>"; otherwise the deployment fails.

EX:
===
gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: '/dev/sdb,/dev/sdc' --------------> here the name should be  /dev/mapper/vdo_sdb
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 1G
          cachemode: writethrough
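
For reference, a corrected form of the above would use the VDO device-mapper path for the VDO-backed disk. A minimal sketch, assuming the vdo_<disk> naming convention used elsewhere in this bug (device names as in the example):

gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: '/dev/mapper/vdo_sdb,/dev/sdc'   <---- VDO device path for the sdb-backed thinpool
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 1G
          cachemode: writethrough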


Version-Release number of selected component (if applicable):
============================================================


How reproducible:
=================
Always

Steps to Reproduce:
==================
1. Start Gluster Deployment
2. Use VDO-enabled volumes and use the same thinpool device for LV cache


Actual results:
==============
Fails during deployment

Expected results:
=================
The cache disk should be named /dev/mapper/vdo_<disk>, as described above


Additional info:

Comment 1 Sahina Bose 2019-11-14 06:22:00 UTC
Do we plan to fix this?

Comment 2 Parth Dhanjal 2019-11-14 08:27:32 UTC
(In reply to Sahina Bose from comment #1)
> Do we plan to fix this?

Yes. Will fix it in the coming sprint

Comment 3 SATHEESARAN 2020-03-30 12:03:55 UTC
Tested with cockpit-ovirt-dashboard-0.14.3

1. Selected the option to enable lvmcache, with the thinpool created on top of a VDO volume

But the device for the thinpool still never had the /dev/mapper/vdo_sdx name:

<snip>
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/mapper/vdo_sdc              <----------vdo_sdc is created on top of sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
</snip>

<snip>
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdd
          cachedisk: '/dev/sdc,/dev/sdg'                  <------------------ expected = '/dev/mapper/vdo_sdc,/dev/sdg'
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdd
          cachethinpoolname: gluster_thinpool_gluster_vg_sdd
          cachelvsize: 220G
          cachemode: writethrough
</snip>

Comment 4 SATHEESARAN 2020-04-15 11:18:21 UTC
Tested with cockpit-ovirt-dashboard-0.14.4 with various scenarios:

1. No dedupe and compression enabled bricks

     - 3 volumes, with the host's 3 bricks on the same disk, no bricks with VDO, and lvmcache created with SSD disk /dev/sdg
     - 4 volumes, with the host's 4 bricks on different disks, and lvmcache created with SSD disk /dev/sdg

2. Dedupe and compression enabled bricks
     - 3 volumes, with the host's 3 bricks on different disks and VDO enabled on all the non-engine volumes

3. Mix of dedupe and compression disabled and enabled bricks
     - 4 volumes: 1 brick on sdb, the 2nd and 3rd bricks with dedupe and compression enabled on the same disk /dev/sdc, and 1 brick on sdd without (see the sketch below)

Testing was also done with asymmetric bricks, i.e. lvmcache attached to different bricks on different hosts.
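
For the mixed case in scenario 3, the generated cache variables would be expected to use the VDO mapper path only for the VDO-backed disk. A minimal sketch, assuming the SSD cache disk is /dev/sdg as in scenario 1 and the vdo_<disk> naming from the fix (sizes are illustrative):

<snip>
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdc
          cachedisk: '/dev/mapper/vdo_sdc,/dev/sdg'   <---- VDO-backed brick uses the mapper path
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdc
          cachethinpoolname: gluster_thinpool_gluster_vg_sdc
          cachelvsize: 220G
          cachemode: writethrough
        - vgname: gluster_vg_sdd
          cachedisk: '/dev/sdd,/dev/sdg'              <---- non-VDO disk keeps the plain /dev/<disk> path
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdd
          cachethinpoolname: gluster_thinpool_gluster_vg_sdd
          cachelvsize: 220G
          cachemode: writethrough
</snip>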

Verified with the above tests

Comment 5 Sandro Bonazzola 2020-05-20 20:02:35 UTC
This bugzilla is included in the oVirt 4.4.0 release, published on May 20th 2020.

Since the problem described in this bug report should be resolved in the oVirt 4.4.0 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

