Bug 1554241 - Problem attaching lvmcache to thinpool
Summary: Problem attaching lvmcache to thinpool
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhhi
Version: rhhiv-1.5
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Gobinda Das
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On: 1554242
Blocks:
 
Reported: 2018-03-12 08:40 UTC by SATHEESARAN
Modified: 2019-05-07 14:36 UTC
CC: 4 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
When bricks are configured asymmetrically, and a logical cache volume is configured, the cache volume is attached to only one brick. This is because the current implementation of asymmetric brick configuration creates a separate volume group and thin pool for each device, so asymmetric brick configurations would require a cache volume per device. However, this would use a large number of cache devices, and is not currently possible to configure using Cockpit. To work around this issue, first remove any cache volumes that have been applied to an asymmetric brick set. # lvconvert --uncache volume_group/logical_cache_volume Then, follow the instructions in https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure_for_virtualization/2.0/html-single/maintaining_red_hat_hyperconverged_infrastructure_for_virtualization/#config-lvmcache to create a logical cache volume manually.
Clone Of:
: 1554242 (view as bug list)
Environment:
Last Closed: 2019-05-07 14:36:16 UTC
Embargoed:



Description SATHEESARAN 2018-03-12 08:40:05 UTC
Description of problem:
-----------------------
With the current implementation of asymmetric brick configuration, there can be a separate thin pool for each device, but lvmcache can be attached to only one of those thin pools. We need clarity on how caching should behave in this case.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
ovirt-cockpit-dashboard-0.11.14

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Start RHHI installation via cockpit
2. For 'host1', select 'sdb' for vmstore volume brick and select 'sdc' for data volume brick
3. Enable lvmcache

Actual results:
---------------
lvmcache is attached to only one thin pool, on one of the devices
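The asymmetry can be confirmed on the host by listing the logical volumes and their segment types. The volume group and pool names below are assumptions for illustration (RHHI-style `gluster_vg_<device>` naming); only one thin pool is expected to report a cached segment type:

```shell
# Sketch: show each LV's segment type and backing pool so the single
# cached thin pool stands out. VG/LV names are assumptions.
lvs -a -o lv_name,vg_name,segtype,pool_lv gluster_vg_sdb gluster_vg_sdc
# Expectation per this bug: one thin pool shows a cache segment (and a
# hidden *_corig companion volume); the pool on the other device stays
# a plain thin-pool with no cache attached.
```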

Expected results:
-----------------
All the bricks (XFS filesystems) should be able to make use of lvmcache

Additional info:
----------------
Two options are possible:
1. The SSD could be partitioned so that each partition acts as the lvmcache for one of the thin pools, or
2. A single thin pool could be created out of multiple devices.

However, the latter approach goes against the performance recommendation for RHGS, where 1 disk -> 1 PV -> 1 VG -> 1 thin pool -> LV is preferred.
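Option 1 could be sketched with plain LVM commands along the following lines. This is a sketch only, not the shipped Cockpit/gdeploy behavior: the device names (/dev/sdd as the SSD), volume group names, thin pool names, and cache sizes are all assumptions for illustration.

```shell
# Sketch of option 1: carve the SSD (assumed /dev/sdd, pre-partitioned
# into sdd1/sdd2) into one cache partition per thin pool, then attach
# each partition as an lvmcache volume. All names/sizes are assumptions.

# Add one SSD partition to each brick's volume group
pvcreate /dev/sdd1 /dev/sdd2
vgextend gluster_vg_sdb /dev/sdd1
vgextend gluster_vg_sdc /dev/sdd2

# Create a cache pool in each VG on its SSD partition and attach it
# to that VG's thin pool
lvcreate --type cache-pool -L 50G -n cpool_vmstore gluster_vg_sdb /dev/sdd1
lvconvert --type cache --cachepool gluster_vg_sdb/cpool_vmstore \
          gluster_vg_sdb/gluster_thinpool_sdb

lvcreate --type cache-pool -L 50G -n cpool_data gluster_vg_sdc /dev/sdd2
lvconvert --type cache --cachepool gluster_vg_sdc/cpool_data \
          gluster_vg_sdc/gluster_thinpool_sdc
```

This preserves the 1 disk -> 1 PV -> 1 VG -> 1 thin pool layout for the data disks while still giving every thin pool its own cache, at the cost of managing one cache partition per device.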

Comment 1 Sahina Bose 2018-07-06 13:23:55 UTC
This needs to be retargeted. Sas, can we move it out of in-flight tracker

Comment 2 SATHEESARAN 2018-07-11 10:36:11 UTC
(In reply to Sahina Bose from comment #1)
> This needs to be retargeted. Sas, can we move it out of in-flight tracker

Sahina,

This will be a problem when a user tries to create bricks from separate devices and wants to attach an LVM cache to all thin pools, or to a specific thin pool.

We should document this behavior as a known issue, with the workaround.

Comment 3 Sahina Bose 2018-07-16 04:45:04 UTC
Please provide doc_text for known issue

Comment 5 Gobinda Das 2018-07-24 06:30:08 UTC
Looks good to me.

Comment 6 Sahina Bose 2019-05-07 14:36:16 UTC
lvmcache is not a high priority, as no consistent performance improvement has been seen. Deferring this for now.

