Bug 1429734 - [HCI] Documentation on setting up cache
Summary: [HCI] Documentation on setting up cache
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: Documentation
Version: rhgs-3.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Laura Bailey
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks: Gluster-HC-2
 
Reported: 2017-03-07 00:22 UTC by Laura Bailey
Modified: 2017-09-01 06:42 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-29 04:12:04 UTC



Comment 3 SATHEESARAN 2017-03-07 06:43:26 UTC
Laura,

Is this bug to address the changes in Beta Doc or Grafton GA doc ?

This feature of setting up cache from Cockpit is not going to happen with Grafton GA, so we need to provide instructions on how to set up cache in the Grafton GA doc as well.

Comment 4 RamaKasturi 2017-03-07 07:05:41 UTC
(In reply to SATHEESARAN from comment #3)
> Laura,
> 
> Is this bug to address the changes in Beta Doc or Grafton GA doc ?
> 
> This feature of setting up cache from cockpit is not going to happen with
> Grafton GA. So we need to provide instructions to how to setup cache even
> for Grafton GA doc.

Sas,

   I sent this as one of my review comments to Laura, to address the changes in the Beta doc. As I learned this morning from RameshN that this bug is not going to be fixed for GA, I think we need to document this for GA as well.

 I have provided the steps required to set up the cache in the description, so Laura can use the same steps for the GA doc too, right?

Thanks
kasturi

Comment 5 Laura Bailey 2017-03-07 09:59:30 UTC
This bug was to address the issue for GA, but if we complete the instructions in a way that can work for Beta I'm happy to update the Beta document.

Based on Kasturi's comments, it looks as though these updates will still be required for Grafton GA. Sas, can you confirm?

Comment 6 SATHEESARAN 2017-03-07 11:56:46 UTC
(In reply to Laura Bailey from comment #5)
> This bug was to address the issue for GA, but if we complete the
> instructions in a way that can work for Beta I'm happy to update the Beta
> document.
> 
> Based on Kasturi's comments, it looks as though these updates will still be
> required for Grafton GA. Sas, can you confirm?

Yes. It's required for Grafton GA.

The content is good for the document, except for minor changes.

Here is the current content as proposed in comment 0:
<current_content>
[vg2]
action=extend
vgname=gluster_vg_sdc
pvname=sdb

[lv5]
action=setup-cache
ssd=sda
vgname=gluster_vg_sdc
poolname=lvthinpool
cache_lv=lvcache
cache_lvsize=180GB
</current_content>
--------------------------------------------------------------
<updated_content>

To enable lvmcache[1], perform the following:

1. In the Cockpit UI, click the 'Review' section of 'Gluster Deployment', select 'Edit', and add the following content after the [lv4] section.

[lv5]
action=setup-cache
ssd=<SSD_Device>
vgname=<vgname>
poolname=<poolname>
cache_lv=<cache_name>
cache_lvsize=<cache_size>
cachemode=<cache_mode>

SSD_Device - SSD device; e.g. if /dev/sdc is the SSD drive, then ssd=sdc
vgname     - value of 'vgname' found under '[vg1]' of the generated gdeploy
             config file
poolname   - value of 'poolname' found under '[lv1]' of the generated gdeploy
             config file
cache_name - name for the cache_data logical volume
cache_size - size of the cache_data logical volume. Note to reserve 1/1000th
             of the space for the cache_meta LV. For example, if you have a
             1000GB SSD, then cache_size should be 999GB.
cache_mode - writethrough or writeback, based on workload. Writethrough is
             recommended for most workloads.

More information about lvmcache is available here:
[1] - https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Logical_Volume_Manager_Administration/lv_overview.html#cache_volumes

</updated_content>
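
For illustration, a filled-in section using the sample values from the comment 0 snippet might read as follows (the cachemode value is an assumption; writethrough is the recommended default):

```ini
# Hypothetical completed [lv5] section; device, VG, pool, and size
# values are taken from the comment 0 example, cachemode is assumed.
[lv5]
action=setup-cache
ssd=sda
vgname=gluster_vg_sdc
poolname=lvthinpool
cache_lv=lvcache
cache_lvsize=180GB
cachemode=writethrough
```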

Comment 7 Paul Cuzner 2017-03-10 03:14:40 UTC
LGTM.. one question though:
- if cachemode is allowed to be writeback, how is that cache protected? In writeback mode, shouldn't the resulting configuration be a mirror? If gdeploy is not accounting for that, perhaps the text should carry a warning?

Comment 8 SATHEESARAN 2017-03-10 06:13:53 UTC
(In reply to Paul Cuzner from comment #7)
> LGTM..one question though;
> - if cachemode is allowed to be writeback - how is that cache protected? In
> writeback mode shouldn't the resulting configuration be a mirror. If gdeploy
> is not accounting for that, perhaps the text should carry a warning?

Thanks Paul. Right, gdeploy doesn't account for cache protection when cachemode is chosen as writeback.

We need to add the suggestion below, from the Deployment guide, to the docs.


<snip>
   IMPORTANT:
   writeback mode must be implemented with a minimum of 2 x SSD/NVMe drives  
   configured with mirroring (LV mirroring or H/W RAID-1) to ensure write  
   durability.
</snip>

Does that sound OK, Paul?

Comment 9 Paul Cuzner 2017-03-13 04:14:12 UTC
I'd probably say 

"IMPORTANT
To avoid the potential of data loss when implementing lvmcache in writeback mode, 2 separate SSD/NVMe devices are highly recommended. By configuring the 2 devices in a RAID-1 configuration (via software or hardware), the potential of data loss from lost writes is reduced significantly."

But it's just really nit-picking.
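
As a rough sketch of the RAID-1 arrangement described above, the two SSDs could be mirrored with standard LVM commands before being attached as a cache. Device names, VG/LV names, and sizes below are hypothetical, and this is an untested outline rather than the gdeploy-generated procedure:

```shell
# Add both SSDs (hypothetical /dev/sdc and /dev/sdd) to the VG holding the thin pool
pvcreate /dev/sdc /dev/sdd
vgextend gluster_vg_sdc /dev/sdc /dev/sdd

# Create a mirrored (RAID-1) LV across the two SSDs for the cache data
lvcreate --type raid1 -m 1 -L 180G -n lvcache gluster_vg_sdc /dev/sdc /dev/sdd

# Convert it to a cache pool in writeback mode and attach it to the thin pool
lvconvert --type cache-pool --cachemode writeback gluster_vg_sdc/lvcache
lvconvert --type cache --cachepool gluster_vg_sdc/lvcache gluster_vg_sdc/lvthinpool
```

With the mirror underneath the cache pool, a single SSD failure no longer loses dirty writeback data.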

Comment 10 SATHEESARAN 2017-03-14 04:01:47 UTC
(In reply to Paul Cuzner from comment #9)
> I'd probably say 
> 
> "IMPORTANT
> To avoid the potential of data loss when implementing lvmcache in writeback
> mode, 2 separate SSD/NVMe devices are highly recommended. By configuring the
> 2 devices in a RAID-1 configuration (via software or hardware), the
> potential of data loss from lost writes is reduced significantly."
> 
> But it's just really nit-picking.

Thanks Paul for those changes. I prefer to have such clean and clear information in the guides.

Comment 20 RamaKasturi 2017-04-04 10:50:23 UTC
Laura,

  If data and vmstore are arbiter volumes, then cache needs to be configured only on the nodes where the data bricks are present, and the gdeploy section would look something like this:

[lv5:{10.70.X1.Y1,10.70.X2.Y2}]
action=setup-cache
ssd=<SSD_Device>
vgname=<vgname>
poolname=<poolname>
cache_lv=<cache_name>
cache_lvsize=<cache_size>
cachemode=<cache_mode>
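
As a concrete sketch with hypothetical host addresses filled in (all values below are placeholders, not from a real deployment):

```ini
# Hypothetical host-filtered cache section; the two addresses are the
# hosts carrying the data bricks in this example.
[lv5:{10.70.36.101,10.70.36.102}]
action=setup-cache
ssd=sda
vgname=gluster_vg_sdc
poolname=lvthinpool
cache_lv=lvcache
cache_lvsize=180GB
cachemode=writethrough
```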
 
Can we add this to the document?

Comment 21 RamaKasturi 2017-04-04 10:50:51 UTC
(In reply to RamaKasturi from comment #20)
> Laura,
> 
>   If data and vmstore are arbiter volumes then cache needs to be configured
> only on nodes where only data bricks are  present and gdeploy section
> would look something like this
> 
> [lv5:{10.70.X1.Y1,10.70.X2.Y2}]
> action=setup-cache
> ssd=<SSD_Device>
> vgname=<vgname>
> poolname=<poolname>
> cache_lv=<cache_name>
> cache_lvsize=<cache_size>
> cachemode=<cache_mode>
>  
> Can we add this  to the document ?

Comment 28 SATHEESARAN 2017-04-06 17:01:29 UTC
Laura,

I find that the explanation for each value of the option is not clear.
Was it omitted for some reason, or missed?

SSD_Device - SSD device; e.g. if /dev/sdc is the SSD drive, then ssd=sdc
vgname     - value of 'vgname' found under '[vg1]' of the generated gdeploy
             config file
poolname   - value of 'poolname' found under '[lv1]' of the generated gdeploy
             config file
cache_name - name for the cache_data logical volume
cache_size - size of the cache_data logical volume. Note to reserve 1/1000th
             of the space for the cache_meta LV. For example, if you have a
             1000GB SSD, then cache_size should be 999GB.
cache_mode - writethrough or writeback, based on workload. Writethrough is
             recommended for most workloads.

Comment 34 SATHEESARAN 2017-04-13 03:55:05 UTC
(In reply to SATHEESARAN from comment #28)
> Laura,
> 
> I find that the explanation for each value of the option is not explained
> clearly.
> Is it avoided due to some reason or missed ?
> 
> SSD_Device - SSD device, eg. if /dev/sdc is the SSD drive,then ssd=sdc
> vgname     - value of 'vgname' found under '[vg1]' of generated gdeploy
> config 
>              file
> poolname   - value of 'poolname' found under '[lv1]' of generated gdeploy
> config                        
>              file
> cache_name - Name for the cache_data logical volume
> cache_size - Size of the cache_data logical volume. Note to reserve 1/1000th
> of 
>              space for cache_meta LV. For example, if you have 1000GB SSD,
> then                
>              cache_size should be 999 GB.
> cache_mode - writethrough or writeback based on workload. Writethrough is   
> 
>              recommended for most workload


This has nothing to do with the documentation, but I thought I should correct a statement that I recorded in this bug earlier.

The 'cache_size' should be the size of the cache that is allocated. The description that I gave above is actually the cache_metadata size calculation.
Note that in this case (gdeploy), we have left it to LVM to compute the cache_metadata size, based on the size of the cache.
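
The 1/1000th rule of thumb from the earlier comments can be expressed as a small helper. The function name and the small floor value are illustrative assumptions, not part of gdeploy:

```python
def suggested_cache_lv_size_gb(ssd_size_gb):
    """Suggest a cache_lv size that reserves roughly 1/1000th of the SSD
    for the cache-metadata LV, per the rule of thumb in this bug.
    gdeploy itself leaves the metadata sizing to LVM, as noted above."""
    # Reserve 1/1000th of the device, with a small floor (~8 MiB,
    # LVM's minimum metadata LV size) as an illustrative safety net.
    meta_gb = max(ssd_size_gb / 1000.0, 0.008)
    return ssd_size_gb - meta_gb

# A 1000 GB SSD leaves 999 GB for the cache data LV, matching the example above.
```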

Comment 35 RamaKasturi 2017-04-13 13:23:52 UTC
Reviewed the content in the link provided at https://bugzilla.redhat.com/show_bug.cgi?id=1429734#c32 and it looks good to me. Moving this bug to the verified state.

Comment 36 Laura Bailey 2017-05-31 05:34:38 UTC
Note that this bug is countermanded by Bug 1457072 and this content won't be included in the RHHI 1.0 docs.

Comment 37 Laura Bailey 2017-08-29 04:12:04 UTC
Fixed in RHGS 3.3 documentation.

