Bug 1608268
Summary:           Support lvm cache for thick LV configuration
Product:           [Red Hat Storage] Red Hat Gluster Storage
Component:         rhhi
Version:           rhhiv-1.5
Status:            CLOSED DEFERRED
Severity:          high
Priority:          medium
Reporter:          bipin <bshetty>
Assignee:          Parth Dhanjal <dparth>
QA Contact:        SATHEESARAN <sasundar>
Docs Contact:
CC:                godas, guillaume.pavese, jcoscia, lmadsen, pasik, rhs-bugs, sabose, seamurph
Target Milestone:  ---
Target Release:    RHHI-V 1.7
Hardware:          x86_64
OS:                Linux
Whiteboard:
Fixed In Version:
Doc Type:          Known Issue
Doc Text:
When you attempt to configure a logical volume cache for a thickly provisioned volume using the Cockpit UI, the deployment fails. You can manually configure a logical volume cache after deployment by adding a faster disk to your volume group using the following procedure. Note that device names are examples.

1. Add the new SSD to the volume group.
   # vgextend gluster_vg_sdb /dev/sdc
2. Create a logical volume from the SSD to use as a cache.
   # lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc
3. Create a cache pool from the new logical volume.
   # lvconvert --type cache-pool gluster_vg_sdb/cachelv
4. Attach the cache pool to the thickly provisioned logical volume as a cache volume.
   # lvconvert --type cache gluster_vg_sdb/cachelv gluster_vg_sdb/gluster_thick_lv1
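As a convenience, the four steps above can be wrapped in a small script. This is only a sketch based on the example names from the doc text (gluster_vg_sdb, /dev/sdc, cachelv, and gluster_thick_lv1 are the example values, not fixed names); it defaults to printing the commands so you can review them before running anything as root.

```shell
#!/bin/sh
# Sketch only: wraps the four manual cache-attach steps from the doc text.
# All names (VG, SSD device, cache LV name/size, origin LV) are the example
# values from this bug -- adjust them for your own system.
set -eu

DRY_RUN=${DRY_RUN:-1}        # default to printing commands; set to 0 to execute
VG=gluster_vg_sdb            # existing volume group
SSD=/dev/sdc                 # fast device to add as cache
CACHE_LV=cachelv             # name for the cache LV
CACHE_SIZE=220G              # size of the cache LV
ORIGIN_LV=gluster_thick_lv1  # thickly provisioned LV to be cached

# Print the command in dry-run mode, otherwise execute it.
run() {
    if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi
}

run vgextend "$VG" "$SSD"                                   # step 1
run lvcreate -n "$CACHE_LV" -L "$CACHE_SIZE" "$VG" "$SSD"   # step 2
run lvconvert --type cache-pool "$VG/$CACHE_LV"             # step 3
run lvconvert --type cache "$VG/$CACHE_LV" "$VG/$ORIGIN_LV" # step 4
```

Run it once with the default dry run to inspect the commands, then rerun with DRY_RUN=0 as root to apply them.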
Story Points:      ---
Clone Of:
Clones:            1608271 (view as bug list)
Environment:
Last Closed:       2019-11-20 09:05:57 UTC
Type:              Bug
Regression:        ---
Mount Type:        ---
Documentation:     ---
CRM:
Verified Versions:
Category:          ---
oVirt Team:        ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team:   ---
Target Upstream Version:
Embargoed:
Bug Depends On:    1608271, 1634682
Bug Blocks:        1548985
Description
bipin
2018-07-25 08:45:04 UTC
The case differs only in where the cache pool LV is attached:

- With a thinpool: the cache pool LV is attached to VG/thinpool.
- Without a thinpool (thick LVs): the cache pool LV is attached to VG/origin_lv.

To support this request, the parameter 'poolname' should be made optional and a new parameter 'origin_lv' should be added. These two parameters, 'poolname' and 'origin_lv', should be mutually exclusive, meaning only one of them may be supplied. If 'poolname' is supplied, attach the cache pool to VG/thinpool; otherwise look for 'origin_lv' and attach the cache to VG/origin_lv.

To aid understanding, here are the steps used to create the lvmcache.

Variables
---------
SSD - /dev/sdc (say 225G)
HDD - /dev/sdb
VG name - gluster_vg_sdb

With thinpool
-------------
Thinpool name - gluster_thinpool_sdb

1. Add the SSD to the VG
   # vgextend gluster_vg_sdb /dev/sdc
2. Create 'cachelv'
   # lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc
3. Create 'cachepool'
   # lvconvert --type cache-pool gluster_vg_sdb/cachelv
4. Attach the 'cachepool' to the thinpool
   # lvconvert --type cache gluster_vg_sdb/cachelv gluster_vg_sdb/gluster_thinpool_sdb

Without thinpool (i.e. with thick LVs)
--------------------------------------
Let's say one of the thick LVs is named 'lv1'.

1. Add the SSD to the VG
   # vgextend gluster_vg_sdb /dev/sdc
2. Create 'cachelv'
   # lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc
3. Create 'cachepool'
   # lvconvert --type cache-pool gluster_vg_sdb/cachelv
4. Attach the 'cachepool' to the thick LV (as required)
   # lvconvert --type cache gluster_vg_sdb/cachelv gluster_vg_sdb/lv1

The dependent gdeploy fix was not accepted, as gdeploy is not ready to accept changes in the code. We also hear that lvmcache in a hyperconverged setup does not deliver the expected performance gain, so we should not consider enabling lvmcache for RHHI setups overall.
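The target-selection rule proposed above (mutually exclusive 'poolname' and 'origin_lv') can be sketched as a small shell helper. This is an illustration of the suggested logic only, not actual gdeploy code; the function name `cache_target` and its arguments are made up for the sketch.

```shell
#!/bin/sh
# Sketch of the proposed selection logic: exactly one of poolname or
# origin_lv must be given; the cache pool is attached to that VG/LV.
# cache_target and its argument order are illustrative, not gdeploy API.

# Usage: cache_target VG POOLNAME ORIGIN_LV
# Prints the VG/LV the cache pool should be attached to, or fails when
# the two parameters are not mutually exclusive (or both are missing).
cache_target() {
    vg=$1; poolname=$2; origin_lv=$3
    if [ -n "$poolname" ] && [ -n "$origin_lv" ]; then
        echo "error: poolname and origin_lv are mutually exclusive" >&2
        return 1
    elif [ -n "$poolname" ]; then
        echo "$vg/$poolname"      # thinpool case
    elif [ -n "$origin_lv" ]; then
        echo "$vg/$origin_lv"     # thick LV case
    else
        echo "error: one of poolname or origin_lv is required" >&2
        return 1
    fi
}
```

The attach step would then be the same in both cases, e.g. `lvconvert --type cache gluster_vg_sdb/cachelv "$(cache_target gluster_vg_sdb gluster_thinpool_sdb "")"`.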
If this fix is really needed, it should get proper acks before being fixed; until then it will remain a known issue.

Known issue
-----------
Using Cockpit deployment, lvmcache cannot be enabled for all-thick-LV configurations.

Workaround
----------
In the case of all thick LVs, an LV cache can be attached to one of the thick LVs with the following steps. Let's say one of the thick LVs is named 'gluster_thick_lv1' under the volume group 'gluster_vg_sdb'.

1. Add the SSD to the VG
   # vgextend gluster_vg_sdb /dev/sdc
2. Create 'cachelv'
   # lvcreate -n cachelv -L 220G gluster_vg_sdb /dev/sdc
3. Create 'cachepool'
   # lvconvert --type cache-pool gluster_vg_sdb/cachelv
4. Attach the 'cachepool' to the thick LV (as required)
   # lvconvert --type cache gluster_vg_sdb/cachelv gluster_vg_sdb/gluster_thick_lv1

I've run into this while setting up hyperconverged RHHI-V per the documentation at https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/1.5/html/deploying_red_hat_hyperconverged_infrastructure_for_virtualization/task-config-rhgs-using-cockpit

Did I understand correctly that we shouldn't generally configure an LV cache when deploying in a 3 node configuration? Quote:

> Also we hear that lvmcache + HC, is not really doing perf gain as expected.
> We should not consider enabling lvmcache for RHHI setup overall.

I'm wondering about the same question as Leif. I did some tests on a RAID5 replica3 cluster with no SSD. Performance was really not good. I tried to follow those steps to add an LV cache made on a RAM device but saw no performance increase.

Is LV cache tested and recommended by Red Hat?

We are budgeting for a production cluster and need to make hardware choices soonish. Any official guidance would be great, because this is confusing.

(In reply to Guillaume Pavese from comment #4)
> I'm wondering about the same question as Leif. I did some tests on a RAID5
> replica3 cluster with no SSD. Performance was really not good. I tried to
> follow those steps to add an LV cache made on a RAM device but saw no
> performance increase.
>
> Is LV cache tested and recommended by Red Hat?
>
> We are budgeting for a production cluster and need to make hardware choices
> soonish. Any official guidance would be great, because this is confusing.

LV cache has been tested, however the performance improvements are very workload specific. There has been no noticeable gain that we could see across all workloads with the latest lvmcache. I would suggest that you test for your workload before using it in production.

This bug is not required, as VDO can now be created on top of a thinpool with the updated VDO systemd unit file. Look for the solution in bug https://bugzilla.redhat.com/show_bug.cgi?id=1600156. Until the fix is in place, this bug will live as a known issue.

With cockpit-ovirt-dashboard-0.13.8-1, VDO is supported with thinpool and there is no thick LV in the RHHI-V deployment. So there is no requirement to attach LVM cache to thinpool.

With this situation in mind, closing this bug.

(In reply to SATHEESARAN from comment #9)
> With cockpit-ovirt-dashboard-0.13.8-1, VDO is supported with thinpool and
> there is no thick LV in the RHHI-V deployment.
>
> So there is no requirement to attach LVM cache to thinpool.

Correction: there is no requirement to attach LVM cache to thick LVs.

> With this situation in mind, closing this bug.