Bug 1608268 - Support lvm cache for thick LV configuration
Summary: Support lvm cache for thick LV configuration
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.5
Hardware: x86_64
OS: Linux
medium
high
Target Milestone: ---
: RHHI-V 1.7
Assignee: Parth Dhanjal
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1608271 1634682
Blocks: 1548985
 
Reported: 2018-07-25 08:45 UTC by bipin
Modified: 2020-02-11 08:21 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
When you attempt to configure a logical volume cache for a thickly provisioned volume using the Cockpit UI, the deployment fails. You can manually configure a logical volume cache after deployment by adding a faster disk to your volume group using the following procedure. Note that device names are examples.
1. Add the new SSD to the volume group.
# vgextend gluster_vg_sdb /dev/sdc
2. Create a logical volume from the SSD to use as a cache.
# lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc
3. Create a cache pool from the new logical volume.
# lvconvert --type cache-pool gluster_vg_sdb/cachelv
4. Attach the cache pool to the thickly provisioned logical volume as a cache volume.
# lvconvert --type cache gluster_vg_sdb/cachelv gluster_vg_sdb/gluster_thick_lv1
Clone Of:
: 1608271
Environment:
Last Closed: 2019-11-20 09:05:57 UTC
Embargoed:


Attachments

Description bipin 2018-07-25 08:45:04 UTC
Description of problem:
======================
While deploying gluster using gdeploy, lvm cache is currently supported only on a thinpool volume. It needs to be supported for thick LV configurations as well.
In the latest deployment, configuring the lvm cache on a thick LV throws the error below.


TASK [Setup SSD for caching | Change the attributes of the logical volume] *****
fatal: [10.70.45.29]: FAILED! => {"msg": "The conditional check 'res.rc != 0 and 'zero new blocks' not in res.msg' failed. The error was: error while evaluating conditional (res.rc != 0 and 'zero new blocks' not in res.msg): 'dict object' has no attribute 'rc'"}
	to retry, use: --limit @/tmp/tmp3b4PFo/cache_setup.retry



Version-Release number of selected component (if applicable):
============================================================
gdeploy-2.0.2-27.el7rhgs.noarch
ansible-2.6.1-1.el7ae.noarch

How reproducible:
================
100%

Steps to Reproduce:
==================
1. Navigate to the Cockpit UI
2. Start the gluster deployment
3. Move to the bricks step and check the 'Enable compression and deduplication' checkbox
4. The thinpool option gets unchecked for that device; enable the lvm cache
5. Proceed with the deployment and it fails

Actual results:
==============
Deployment fails with the error shown in the description

Expected results:
================
Deployment should not fail

Additional info:
===============
Additionally, tried attaching the lvmcache to a thick LV within the gdeploy config (changing the poolname to the LV name), but it failed.
Here is the conf file:
[hosts]
10.70.37.146

[lv]
action=setup-cache
ssd=vdd
vgname=vg1
poolname=lv1
cache_lv=lvcache
cache_lvsize=9GB
cachemode=writethrough
ignore_lv_errors=no

Here is the output:
[root@rhsqa-grafton7 ~]# gdeploy -c gdeployConfig.conf 

PLAY [gluster_servers] ***************************************************************************************************************************

TASK [Setup SSD for caching | Create the physical volume] ****************************************************************************************
changed: [10.70.37.146] => (item=/dev/vdd)

TASK [Setup SSD for caching | Extend the Volume Group] *******************************************************************************************
changed: [10.70.37.146] => (item=/dev/vdd)

TASK [Setup SSD for caching | Change the attributes of the logical volume] ***********************************************************************
fatal: [10.70.37.146]: FAILED! => {"changed": false, "failed_when_result": true, "msg": "  Command on LV vg1/lv1 uses options that require LV types thinpool .\n  Command not permitted on LV vg1/lv1.\n", "rc": 5}
	to retry, use: --limit @/tmp/tmpuYsXsi/cache_setup.retry

PLAY RECAP ***************************************************************************************************************************************
10.70.37.146               : ok=2    changed=2    unreachable=0    failed=1
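
Judging from the 'zero new blocks' check in the first error above, the failing "Change the attributes of the logical volume" task most likely runs something like the following (an assumption about gdeploy's cache_setup playbook, not confirmed here); the --zero option is valid only for thin pools, which would explain why it fails against the thick LV vg1/lv1:

# lvchange --zero n vg1/lv1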

Comment 1 SATHEESARAN 2018-10-01 10:56:54 UTC
The case differs only in the place where the cachepool LV is attached to the OriginLV. 

With thinpool - Cachepool LV is attached to the VG/thinpool
non-thinpool  - Cachepool LV is attached to the VG/origin_lv

To enable support for this request, the parameter 'poolname' should be made optional and one more parameter 'origin_lv' should be made available. These 2 parameters 'poolname' & 'origin_lv' should be mutually exclusive, which means only one of them should be available.

If this param 'poolname' is available, attach cachepool to VG/thinpool, else look for param 'origin_lv' and attach cache to 'VG/origin_lv'
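
As a sketch of what the proposed interface could look like (hypothetical; an 'origin_lv' parameter does not exist in gdeploy as of this report), the [lv] section from the additional info above would become:

[lv]
action=setup-cache
ssd=vdd
vgname=vg1
origin_lv=lv1
cache_lv=lvcache
cache_lvsize=9GB
cachemode=writethrough
ignore_lv_errors=no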


Let me also furnish the steps used to create the lvmcache, which should aid understanding.

Variables
------------
SSD - /dev/sdc ( say 225G )
HDD - /dev/sdb 
VG name - gluster_vg_sdb

With thinpool
-------------
thinpool name - gluster_thinpool_sdb

1. Add the SSD to the VG
# vgextend gluster_vg_sdb /dev/sdc

2. Create 'cachelv'
# lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc

3. Create 'cachepool'
# lvconvert --type cache-pool gluster_vg_sdb/cachelv

4. Attach the 'cachepool' to the thinpool
# lvconvert --type cache gluster_vg_sdb/cachelv gluster_vg_sdb/gluster_thinpool_sdb
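
To confirm the cachepool was attached (an optional check, not part of the original steps), the lvs report can be inspected, for example:

# lvs -a -o name,attr,pool_lv,origin gluster_vg_sdb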

Without thinpool (i.e. with thick LVs)
-------------------------------------
Let's say one of the thick LVs is named 'lv1'

1. Add the SSD to the VG
# vgextend gluster_vg_sdb /dev/sdc

2. Create 'cachelv'
# lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc

3. Create 'cachepool'
# lvconvert --type cache-pool gluster_vg_sdb/cachelv

4. Attach the 'cachepool' to the thick LV (as per requirement)
# lvconvert --type cache gluster_vg_sdb/cachelv gluster_vg_sdb/lv1

Comment 2 SATHEESARAN 2018-10-24 03:08:57 UTC
The dependent gdeploy fix was not accepted for this change, as gdeploy is not currently accepting code changes.

Also, we hear that lvmcache + HC does not really provide the performance gain expected.
We should not consider enabling lvmcache for RHHI setups overall.

If this fix is strongly required, it should get proper acks for this issue to be fixed; until then, this will remain a known issue.


Known issue
-----------
Using Cockpit deployment, lvmcache cannot be enabled for all-thick-LV configurations

Workaround
-----------
When all LVs are thick, the LV cache can be attached to one of the thick LVs with the following steps:

Let's say one of the thick LVs is named 'gluster_thick_lv1' under the volume group 'gluster_vg_sdb'

1. Add the SSD to the VG
# vgextend gluster_vg_sdb /dev/sdc

2. Create 'cachelv'
# lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc

3. Create 'cachepool'
# lvconvert --type cache-pool gluster_vg_sdb/cachelv

4. Attach the 'cachepool' to the thick LV (as per requirement)
# lvconvert --type cache gluster_vg_sdb/cachelv gluster_vg_sdb/gluster_thick_lv1
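
If the cache needs to be detached again later (for example, before replacing the SSD), lvconvert provides a standard option for this; this is an additional note for reference, not part of the workaround above:

# lvconvert --uncache gluster_vg_sdb/gluster_thick_lv1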

Comment 3 Leif Madsen 2019-02-08 00:45:36 UTC
I've run into this while setting up hyperconverged RHHI-V per the documentation at https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/task-config-rhgs-using-cockpit

Did I understand correctly that we shouldn't generally configure an LV cache when deploying in a 3 node configuration?

Quote:

> Also, we hear that lvmcache + HC does not really provide the performance gain expected.
> We should not consider enabling lvmcache for RHHI setups overall.

Comment 4 Guillaume Pavese 2019-02-27 14:40:23 UTC
I am wondering about the same question as Leif. I did some tests on a RAID5 replica 3 cluster with no SSD. Performance was really not good. I tried to follow those steps to add an LV cache made on a RAM device but saw no performance increase.

Is LV cache tested and recommended by Red Hat?

We are budgeting for a production cluster and need to make hardware choices soonish. Any official guidance there would be great because this is confusing.

Comment 5 Sahina Bose 2019-03-14 13:27:37 UTC
(In reply to Guillaume Pavese from comment #4)
> I am wondering about the same question as Leif. I did some tests on a RAID5
> replica 3 cluster with no SSD. Performance was really not good. I tried to
> follow those steps to add an LV cache made on a RAM device but saw no
> performance increase.
> 
> Is LV cache tested and recommended by Red Hat?
> 
> We are budgeting for a production cluster and need to make hardware choices
> soonish. Any official guidance there would be great because this is confusing.

LV cache has been tested; however, the performance improvements are very workload specific. There has been no noticeable gain that we could see across all workloads with the latest lvmcache. I would suggest that you test with your workload before using it in production.
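
Such a test could be as simple as running the same fio job against a brick path before and after attaching the cache (an illustrative sketch; fio, the job parameters, and the path are assumptions, not taken from this bug):

# fio --name=randwrite-test --directory=/gluster_bricks/data/test \
      --rw=randwrite --bs=4k --size=2G --ioengine=libaio --direct=1 \
      --numjobs=4 --runtime=120 --time_based --group_reporting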

Comment 6 SATHEESARAN 2019-03-27 08:00:53 UTC
This fix is no longer required, as VDO can now be created on top of a thinpool with the updated VDO systemd unit file.
See the solution in the bug - https://bugzilla.redhat.com/show_bug.cgi?id=1600156

Until the fix is in place, this bug will remain a known issue.

Comment 9 SATHEESARAN 2019-11-20 09:05:57 UTC
With cockpit-ovirt-dashboard-0.13.8-1, VDO is supported with thinpool and there is no thick LV in a RHHI-V deployment.

So there is no requirement to attach LVM cache to thinpool.

With this situation in mind, closing this bug.

Comment 10 SATHEESARAN 2020-02-11 08:21:22 UTC
(In reply to SATHEESARAN from comment #9)
> With cockpit-ovirt-dashboard-0.13.8-1, VDO is supported with thinpool and
> there is no thick LV in a RHHI-V deployment.
> 
> So there is no requirement to attach LVM cache to thinpool.
Correction, there is no requirement to attach LVM cache to thick LVs.
> 
> With this situation in mind, closing this bug.

