Bug 1754743

Summary: Enabling LV cache along with VDO volumes fails during Deployment
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: bipin <bshetty>
Component: rhhi
Assignee: Parth Dhanjal <dparth>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: high
Docs Contact:
Priority: medium
Version: rhhiv-1.6
CC: akrishna, dparth, godas, rhs-bugs, sabose, sasundar
Target Milestone: ---   
Target Release: RHHI-V 1.8   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: cockpit-ovirt-0.14.4-1.el8ev
Doc Type: Bug Fix
Doc Text:
Previously, configuring volumes that used both virtual disk optimization (VDO) and a cache volume caused deployment in the web console to fail. This occurred because the underlying volume path was specified in the form "/dev/sdx" instead of the form "/dev/mapper/vdo_sdx". VDO volumes are now specified using the correct form and deployment no longer fails.
Story Points: ---
Clone Of:
: 1754748 (view as bug list)
Environment:
Last Closed: 2020-08-04 14:50:58 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1754748    
Bug Blocks: 1779975    

Description bipin 2019-09-24 03:22:50 UTC
Description of problem:
======================
While deploying the LV cache, the existing thinpool device is used to attach the cache. If that thinpool device is backed by VDO, the cache disk name should be "/dev/mapper/vdo_<disk>" rather than "/dev/<disk>"; otherwise the deployment fails.

EX:
===
gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: '/dev/sdb,/dev/sdc' --------------> here the name should be  /dev/mapper/vdo_sdb
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 1G
          cachemode: writethrough
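
For reference, a corrected form of the same variables would look like the sketch below (assuming /dev/mapper/vdo_sdb is the VDO device created on top of sdb, with /dev/sdc as the SSD cache disk, as in the example above):

gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: '/dev/mapper/vdo_sdb,/dev/sdc'   # VDO-backed path for sdb; /dev/sdc is the assumed SSD cache disk from the example
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 1G
          cachemode: writethrough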


Version-Release number of selected component (if applicable):
============================================================


How reproducible:
=================
Always

Steps to Reproduce:
==================
1. Start Gluster Deployment
2. Use VDO-enabled volumes and use the same thinpool device for LV cache


Actual results:
==============
Fails during deployment

Expected results:
=================
The cache disk should use the "/dev/mapper/vdo_<disk>" name, as mentioned above


Additional info:

Comment 1 Anjana KD 2019-09-24 07:17:40 UTC
Kindly provide the Doc Text and Doc Type.

Comment 3 SATHEESARAN 2020-03-30 12:04:16 UTC
Tested with cockpit-ovirt-dashboard-0.14.3

1. Selected the option to enable lvmcache, with the thinpool created on top of a VDO volume

But the device for the thinpool still never had /dev/mapper/vdo_sdx:

<snip>
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/mapper/vdo_sdc              <----------vdo_sdc is created on top of sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
</snip>

<snip>
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdd
          cachedisk: '/dev/sdc,/dev/sdg'                  <------------------ expected = '/dev/mapper/vdo_sdc,/dev/sdg'
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdd
          cachethinpoolname: gluster_thinpool_gluster_vg_sdd
          cachelvsize: 220G
          cachemode: writethrough
</snip>

Comment 4 SATHEESARAN 2020-04-15 11:18:01 UTC
Tested with cockpit-ovirt-dashboard-0.14.4 with various scenarios:

1. Bricks without dedupe and compression enabled

     - 3 volumes, with all 3 bricks on the host on the same disk, no bricks with VDO, and lvmcache created with SSD disk /dev/sdg
     - 4 volumes, with the 4 bricks on the host on different disks, and lvmcache created with SSD disk /dev/sdg

2. Bricks with dedupe and compression enabled
     - 3 volumes, with the 3 bricks on the host on different disks and VDO enabled on all the non-engine volumes

3. Mix of bricks with dedupe and compression disabled and enabled
     - 4 volumes: 1 brick on sdb, the 2nd and 3rd bricks with dedupe and compression enabled on the same disk /dev/sdc, and 1 brick on sdd without

Testing was also done with asymmetric bricks, i.e. lvmcache attached to different bricks on different hosts.

Verified with the above tests

Comment 9 errata-xmlrpc 2020-08-04 14:50:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3314