Description of problem:
======================
While deploying gluster using gdeploy, lvm cache is currently supported only on a thinpool volume. Support for thick LV configurations is needed as well. In the latest deployment, configuring the lvm cache on a thick LV throws the error below.

TASK [Setup SSD for caching | Change the attributes of the logical volume] *****
fatal: [10.70.45.29]: FAILED! => {"msg": "The conditional check 'res.rc != 0 and 'zero new blocks' not in res.msg' failed. The error was: error while evaluating conditional (res.rc != 0 and 'zero new blocks' not in res.msg): 'dict object' has no attribute 'rc'"}
	to retry, use: --limit @/tmp/tmp3b4PFo/cache_setup.retry

Version-Release number of selected component (if applicable):
============================================================
gdeploy-2.0.2-27.el7rhgs.noarch
ansible-2.6.1-1.el7ae.noarch

How reproducible:
================
100%

Steps to Reproduce:
==================
1. Navigate to the cockpit UI
2. Start the gluster deployment
3. On the bricks step, check the 'enable compression and deduplication' checkbox
4. The thinpool option gets unchecked for that device; enable the lvm cache
5. Proceed with the deployment and it fails

Actual results:
==============
Deployment fails with the error above.

Expected results:
================
Deployment shouldn't fail.

Additional info:
===============
Additionally, tried attaching lvmcache to a thick LV from a gdeploy conf directly (changing the poolname to the LV name), but it failed as well.

Here is the conf file:

[hosts]
10.70.37.146

[lv]
action=setup-cache
ssd=vdd
vgname=vg1
poolname=lv1
cache_lv=lvcache
cache_lvsize=9GB
cachemode=writethrough
ignore_lv_errors=no

Here is the output:

[root@rhsqa-grafton7 ~]# gdeploy -c gdeployConfig.conf

PLAY [gluster_servers] ***************************************************************************************************************************

TASK [Setup SSD for caching | Create the physical volume] ****************************************************************************************
changed: [10.70.37.146] => (item=/dev/vdd)

TASK [Setup SSD for caching | Extend the Volume Group] *******************************************************************************************
changed: [10.70.37.146] => (item=/dev/vdd)

TASK [Setup SSD for caching | Change the attributes of the logical volume] ***********************************************************************
fatal: [10.70.37.146]: FAILED! => {"changed": false, "failed_when_result": true, "msg": " Command on LV vg1/lv1 uses options that require LV types thinpool .\n Command not permitted on LV vg1/lv1.\n", "rc": 5}
	to retry, use: --limit @/tmp/tmpuYsXsi/cache_setup.retry

PLAY RECAP ***************************************************************************************************************************************
10.70.37.146               : ok=2   changed=2   unreachable=0   failed=1
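A note on the failure mode: judging by the 'zero new blocks' string in the failed conditional and by the lvm error above, the "Change the attributes" task appears to run an lvchange call with a thinpool-only option (something like --zero). This is an inference from the error text, not a confirmed reading of gdeploy's cache_setup play, but the lvm-side failure can be reproduced directly against a thick LV:

# lvchange --zero n vg1/lv1
  Command on LV vg1/lv1 uses options that require LV types thinpool.
  Command not permitted on LV vg1/lv1.

The same two messages appear verbatim in the ansible failure above, which is what points at a thinpool-only attribute being applied to the thick LV.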
The case differs only in where the cachepool LV is attached:

With thinpool    - cachepool LV is attached to VG/thinpool
Without thinpool - cachepool LV is attached to VG/origin_lv

To support this request, the parameter 'poolname' should be made optional and one more parameter, 'origin_lv', should be introduced. These two parameters, 'poolname' and 'origin_lv', should be mutually exclusive, meaning only one of them may be supplied. If 'poolname' is supplied, attach the cachepool to VG/thinpool; otherwise look for 'origin_lv' and attach the cache to VG/origin_lv. (A sketch of the proposed conf section follows at the end of this comment.)

Let me also furnish the steps used to create the lvmcache, which should help the understanding.

Variables
---------
SSD - /dev/sdc ( say 225G )
HDD - /dev/sdb
VG name - gluster_vg_sdb

With thinpool
-------------
thinpool name - gluster_thinpool_sdb

1. Add the SSD to the VG
# vgextend gluster_vg_sdb /dev/sdc

2. Create 'cachelv'
# lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc

3. Create 'cachepool'
# lvconvert --type cache-pool gluster_vg_sdb/cachelv

4. Attach the 'cachepool' to the thinpool
# lvconvert --type cache --cachepool gluster_vg_sdb/cachelv gluster_vg_sdb/gluster_thinpool_sdb

Without thinpool (i.e. with thick LVs)
--------------------------------------
Let's say one of the thick LVs is named 'lv1'

1. Add the SSD to the VG
# vgextend gluster_vg_sdb /dev/sdc

2. Create 'cachelv'
# lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc

3. Create 'cachepool'
# lvconvert --type cache-pool gluster_vg_sdb/cachelv

4. Attach the 'cachepool' to the thick LV ( as per requirement )
# lvconvert --type cache --cachepool gluster_vg_sdb/cachelv gluster_vg_sdb/lv1
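For illustration, here is how the conf section from the description might look once the proposed 'origin_lv' parameter exists. 'origin_lv' is not implemented in gdeploy today; this is only a sketch of the proposed interface, with the other values carried over from the description:

[lv]
action=setup-cache
ssd=vdd
vgname=vg1
# 'origin_lv' (proposed) would replace 'poolname' for thick LVs; the two are mutually exclusive
origin_lv=lv1
cache_lv=lvcache
cache_lvsize=9GB
cachemode=writethrough
ignore_lv_errors=no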
The dependent gdeploy fix has not been accepted for this change, as gdeploy is not currently accepting code changes. Also, we hear that lvmcache + HC is not really delivering the expected performance gain. We should not consider enabling lvmcache for RHHI setups overall.

If this fix is highly required, then it should get proper acks for this issue to be fixed; until then, this will remain a known issue.

Known issue
-----------
Using cockpit deployment, lvmcache can't be enabled for all-thick-LV configurations.

Workaround
----------
In the case of all thick LVs, an LV cache can be attached to one of the thick LVs with the following steps.

Let's say one of the thick LVs is named 'gluster_thick_lv1' under the volume group 'gluster_vg_sdb':

1. Add the SSD to the VG
# vgextend gluster_vg_sdb /dev/sdc

2. Create 'cachelv'
# lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc

3. Create 'cachepool'
# lvconvert --type cache-pool gluster_vg_sdb/cachelv

4. Attach the 'cachepool' to the thick LV ( as per requirement )
# lvconvert --type cache --cachepool gluster_vg_sdb/cachelv gluster_vg_sdb/gluster_thick_lv1
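As a sanity check after the workaround (not part of the original steps; the column names below are standard lvs reporting fields, so adjust if your lvm version reports them differently), the attachment can be verified with:

# lvs -a -o name,vg_name,segtype,pool_lv gluster_vg_sdb

After step 4, 'gluster_thick_lv1' should show segment type 'cache' with '[cachelv]' as its pool, and the hidden cache-pool sub-LVs should appear in the -a listing.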
I've run into this while setting up hyperconverged RHHI-V per the documentation at https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/task-config-rhgs-using-cockpit

Did I understand correctly that we generally shouldn't configure an LV cache when deploying in a 3-node configuration?

Quote:
> Also, we hear that lvmcache + HC is not really delivering the expected performance gain.
> We should not consider enabling lvmcache for RHHI setups overall.
I'm wondering about the same question as Leif. I did some tests on a RAID5 replica3 cluster with no SSD, and performance was really not good. I tried to follow those steps to add an LV cache backed by a RAM device, but saw no performance increase.

Is LV cache tested and recommended by Red Hat?

We are budgeting for a production cluster and need to make hardware choices soonish. Any official guidance here would be great, because this is confusing.
(In reply to Guillaume Pavese from comment #4)
> I'm wondering about the same question as Leif. I did some tests on a RAID5
> replica3 cluster with no SSD, and performance was really not good. I tried
> to follow those steps to add an LV cache backed by a RAM device, but saw no
> performance increase.
>
> Is LV cache tested and recommended by Red Hat?
>
> We are budgeting for a production cluster and need to make hardware choices
> soonish. Any official guidance here would be great, because this is
> confusing.

LV cache has been tested, but the performance improvements are very workload specific. There has been no noticeable gain that we could see across all workloads with the latest lvmcache. I would suggest that you test with your workload before using it in production.
This bug is not required, as VDO can now be used together with a thinpool, given the updated VDO systemd unit file. See the solution in bug https://bugzilla.redhat.com/show_bug.cgi?id=1600156

Till the fix is in place, this bug will live as a known issue.
With cockpit-ovirt-dashboard-0.13.8-1, VDO is supported with thinpool and there is no thick LV in the RHHI-V deployment.

So there is no requirement to attach LVM cache to thinpool.

With this situation in mind, closing this bug.
(In reply to SATHEESARAN from comment #9)
> With cockpit-ovirt-dashboard-0.13.8-1, VDO is supported with thinpool and
> there is no thick LV in the RHHI-V deployment.
>
> So there is no requirement to attach LVM cache to thinpool.

Correction: there is no requirement to attach LVM cache to thick LVs.

>
> With this situation in mind, closing this bug.