Bug 1754743 - Enabling LV cache along with VDO volumes fails during Deployment
Summary: Enabling LV cache along with VDO volumes fails during Deployment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.6
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: RHHI-V 1.8
Assignee: Parth Dhanjal
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1754748
Blocks: RHHI-V-1.8-Engineering-Backlog-BZs
 
Reported: 2019-09-24 03:22 UTC by bipin
Modified: 2020-08-04 14:51 UTC
CC: 6 users

Fixed In Version: cockpit-ovirt-0.14.4-1.el8ev
Doc Type: Bug Fix
Doc Text:
Previously, configuring volumes that used both virtual disk optimization (VDO) and a cache volume caused deployment in the web console to fail. This occurred because the underlying volume path was specified in the form "/dev/sdx" instead of the form "/dev/mapper/vdo_sdx". VDO volumes are now specified using the correct form and deployment no longer fails.
Clone Of:
Clones: 1754748
Environment:
Last Closed: 2020-08-04 14:50:58 UTC
Embargoed:




Links:
Red Hat Product Errata RHEA-2020:3314 (last updated 2020-08-04 14:51:25 UTC)

Description bipin 2019-09-24 03:22:50 UTC
Description of problem:
======================
While deploying the LV cache, the existing thinpool device is used to attach the cache. If that thinpool device sits on VDO, the cache disk name must be specified as "/dev/mapper/vdo_<disk>" rather than "/dev/<disk>"; otherwise the deployment fails.

EX:
===
gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: '/dev/sdb,/dev/sdc' --------------> here the name should be  /dev/mapper/vdo_sdb
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 1G
          cachemode: writethrough
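
For clarity, the corrected snippet would look like this (a sketch; vdo_sdb is the VDO device created on top of sdb, and /dev/sdc is the SSD cache disk):

gluster_infra_cache_vars:
        - vgname: gluster_vg_sdb
          cachedisk: '/dev/mapper/vdo_sdb,/dev/sdc'   # VDO-backed thinpool device first, SSD cache device second
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdb
          cachethinpoolname: gluster_thinpool_gluster_vg_sdb
          cachelvsize: 1G
          cachemode: writethrough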


Version-Release number of selected component (if applicable):
============================================================


How reproducible:
=================
Always

Steps to Reproduce:
==================
1. Start Gluster Deployment
2. Use VDO-enabled volumes and use the same thinpool device for LV cache (see the sketch below)
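
For reference, a VDO-enabled brick appears in the generated deployment variables with a /dev/mapper/vdo_* physical volume rather than the raw disk (a sketch; disk names are illustrative):

      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/mapper/vdo_sdb   # VDO device created on top of /dev/sdb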


Actual results:
==============
Fails during deployment

Expected results:
=================
The cache disk should be specified as /dev/mapper/vdo_<disk>, as described above


Additional info:

Comment 1 Anjana KD 2019-09-24 07:17:40 UTC
Kindly provide the Doc Text and Doc Type.

Comment 3 SATHEESARAN 2020-03-30 12:04:16 UTC
Tested with cockpit-ovirt-dashboard-0.14.3

1. Selected the option to enable lvmcache, with the thinpool created on top of a VDO volume.

But the device used for the thinpool still never had the /dev/mapper/vdo_sdx form:

<snip>
      gluster_infra_volume_groups:
        - vgname: gluster_vg_sdb
          pvname: /dev/sdb
        - vgname: gluster_vg_sdc
          pvname: /dev/mapper/vdo_sdc              <----------vdo_sdc is created on top of sdc
        - vgname: gluster_vg_sdd
          pvname: /dev/sdd
</snip>

<snip>
      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdd
          cachedisk: '/dev/sdc,/dev/sdg'                  <------------------ expected = '/dev/mapper/vdo_sdc,/dev/sdg'
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdd
          cachethinpoolname: gluster_thinpool_gluster_vg_sdd
          cachelvsize: 220G
          cachemode: writethrough
</snip>
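
Putting the two snips together, the value that was expected (a sketch based on the arrow above; /dev/sdg is the SSD cache disk) is:

      gluster_infra_cache_vars:
        - vgname: gluster_vg_sdd
          cachedisk: '/dev/mapper/vdo_sdc,/dev/sdg'   # VDO-backed device first, SSD cache device second
          cachelvname: cachelv_gluster_thinpool_gluster_vg_sdd
          cachethinpoolname: gluster_thinpool_gluster_vg_sdd
          cachelvsize: 220G
          cachemode: writethrough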

Comment 4 SATHEESARAN 2020-04-15 11:18:01 UTC
Tested with cockpit-ovirt-dashboard-0.14.4 with various scenarios:

1. Bricks without dedupe and compression enabled

     - 3 volumes, 3 bricks on the host on the same disk, no bricks with VDO, creating lvmcache with SSD disk /dev/sdg
     - 4 volumes, 4 bricks on the host on different disks, creating lvmcache with SSD disk /dev/sdg

2. Bricks with dedupe and compression enabled
     - 3 volumes, 3 bricks on the host on different disks, with VDO enabled on all the non-engine volumes

3. Mix of bricks with dedupe and compression disabled and enabled
     - 4 volumes: 1 brick on sdb, the 2nd and 3rd bricks with dedupe and compression enabled on the same disk /dev/sdc, and 1 brick on sdd without dedupe and compression

Testing was also done with asymmetric bricks, i.e. lvmcache attached to different bricks on different hosts.

Verified with the above tests.

Comment 9 errata-xmlrpc 2020-08-04 14:50:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (RHHI for Virtualization 1.8 bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2020:3314

