Laura,

Is this bug to address the changes in the Beta doc or the Grafton GA doc?

The feature of setting up the cache from Cockpit is not going to happen for Grafton GA, so we need to provide instructions on how to set up the cache in the Grafton GA doc as well.
(In reply to SATHEESARAN from comment #3)
> Laura,
>
> Is this bug to address the changes in the Beta doc or the Grafton GA doc?
>
> The feature of setting up the cache from Cockpit is not going to happen
> for Grafton GA, so we need to provide instructions on how to set up the
> cache in the Grafton GA doc as well.

sas, I sent this as one of my review comments to Laura, to address the changes in the Beta doc. As I learnt this morning from rameshN that this bug is not going to be fixed for GA, I think we need to document this for GA as well. I have provided the steps required to set up the cache in the description, so Laura can use the same steps for the GA doc too, right?

Thanks
kasturi
This bug was to address the issue for GA, but if we complete the instructions in a way that can work for Beta, I'm happy to update the Beta document.

Based on Kasturi's comments, it looks as though these updates will still be required for Grafton GA. Sas, can you confirm?
(In reply to Laura Bailey from comment #5)
> This bug was to address the issue for GA, but if we complete the
> instructions in a way that can work for Beta, I'm happy to update the Beta
> document.
>
> Based on Kasturi's comments, it looks as though these updates will still be
> required for Grafton GA. Sas, can you confirm?

Yes, it's required for Grafton GA. The content is good for the document, except for minor changes.

Here is the current content as proposed in comment 0:

<current_content>
[vg2]
action=extend
vgname=gluster_vg_sdc
pvname=sdb

[lv5]
action=setup-cache
ssd=sda
vgname=gluster_vg_sdc
poolname=lvthinpool
cache_lv=lvcache
cache_lvsize=180GB
</current_content>

--------------------------------------------------------------

<updated_content>
To enable lvmcache[1], perform the following:

1. On the Cockpit UI, click on the 'Review' section of 'Gluster Deployment', select 'Edit', and add the following content after the [lv4] section.

[lv5]
action=setup-cache
ssd=<SSD_Device>
vgname=<vgname>
poolname=<poolname>
cache_lv=<cache_name>
cache_lvsize=<cache_size>
cachemode=<cache_mode>

SSD_Device - SSD device; e.g. if /dev/sdc is the SSD drive, then ssd=sdc
vgname     - value of 'vgname' found under '[vg1]' of the generated gdeploy
             config file
poolname   - value of 'poolname' found under '[lv1]' of the generated gdeploy
             config file
cache_name - name for the cache_data logical volume
cache_size - size of the cache_data logical volume. Note to reserve 1/1000th
             of the space for the cache_meta LV. For example, if you have a
             1000GB SSD, then cache_size should be 999GB.
cache_mode - writethrough or writeback, based on workload. Writethrough is
             recommended for most workloads.

More information about lvmcache is available here:

[1] - https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Logical_Volume_Manager_Administration/lv_overview.html#cache_volumes
</updated_content>
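For illustration, a filled-in [lv5] section might look like the following. This is only a sketch: the values reuse the example names from the <current_content> block above (sda as the SSD, gluster_vg_sdc, lvthinpool, lvcache, 180GB) and should be replaced with the names from your own generated gdeploy config file.

# Example only; substitute values from your generated gdeploy config file.
[lv5]
action=setup-cache
ssd=sda
vgname=gluster_vg_sdc
poolname=lvthinpool
cache_lv=lvcache
cache_lvsize=180GB
cachemode=writethrough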
LGTM. One question though: if cachemode is allowed to be writeback, how is that cache protected? In writeback mode, shouldn't the resulting configuration be a mirror? If gdeploy is not accounting for that, perhaps the text should carry a warning?
(In reply to Paul Cuzner from comment #7)
> LGTM. One question though: if cachemode is allowed to be writeback, how is
> that cache protected? In writeback mode, shouldn't the resulting
> configuration be a mirror? If gdeploy is not accounting for that, perhaps
> the text should carry a warning?

Thanks Paul. Right, gdeploy doesn't account for cache protection when the cache mode is chosen as writeback. We need to add the suggestion from the Deployment Guide to the docs:

<snip>
IMPORTANT: writeback mode must be implemented with a minimum of 2 x SSD/NVMe drives configured with mirroring (LV mirroring or H/W RAID-1) to ensure write durability.
</snip>

Does that sound ok, Paul?
I'd probably say:

"IMPORTANT
To avoid the potential of data loss when implementing lvmcache in writeback mode, 2 separate SSD/NVMe devices are highly recommended. By configuring the 2 devices in a RAID-1 configuration (via software or hardware), the potential of data loss from lost writes is reduced significantly."

But it's just really nit-picking.
(In reply to Paul Cuzner from comment #9)
> I'd probably say:
>
> "IMPORTANT
> To avoid the potential of data loss when implementing lvmcache in writeback
> mode, 2 separate SSD/NVMe devices are highly recommended. By configuring the
> 2 devices in a RAID-1 configuration (via software or hardware), the
> potential of data loss from lost writes is reduced significantly."
>
> But it's just really nit-picking.

Thanks Paul for those changes. I prefer to have such clean and clear information in the guides.
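As a hypothetical sketch of how this recommendation could be applied: mirror the two SSDs with software RAID-1 before deployment, then point the setup-cache section at the resulting md device. The device names (/dev/sdb, /dev/sdc, /dev/md0) and the ssd=md0 value are assumptions for illustration, not a documented gdeploy procedure.

# Hypothetical: mirror the two SSDs with software RAID-1 first, e.g.:
#
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
#
# then use the mirrored device as the cache SSD, so writeback mode does
# not lose writes if a single SSD fails.
[lv5]
action=setup-cache
ssd=md0
vgname=gluster_vg_sdc
poolname=lvthinpool
cache_lv=lvcache
cache_lvsize=180GB
cachemode=writeback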
Laura,

If data and vmstore are arbiter volumes, then the cache needs to be configured only on the nodes where the data bricks are present, and the gdeploy section would look something like this:

[lv5:{10.70.X1.Y1,10.70.X2.Y2}]
action=setup-cache
ssd=<SSD_Device>
vgname=<vgname>
poolname=<poolname>
cache_lv=<cache_name>
cache_lvsize=<cache_size>
cachemode=<cache_mode>

Can we add this to the document?
(In reply to RamaKasturi from comment #20)
> Laura,
>
> If data and vmstore are arbiter volumes, then the cache needs to be
> configured only on the nodes where the data bricks are present, and the
> gdeploy section would look something like this:
>
> [lv5:{10.70.X1.Y1,10.70.X2.Y2}]
> action=setup-cache
> ssd=<SSD_Device>
> vgname=<vgname>
> poolname=<poolname>
> cache_lv=<cache_name>
> cache_lvsize=<cache_size>
> cachemode=<cache_mode>
>
> Can we add this to the document?
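To make the host-restricted syntax concrete, a filled-in section might look like the following. The two IP addresses are hypothetical placeholders for the hosts that hold the data bricks, and the other values reuse the example names from earlier in this bug.

# Example only; list the hosts that hold the data bricks (hypothetical
# IPs shown), not the arbiter-only node.
[lv5:{10.70.36.101,10.70.36.102}]
action=setup-cache
ssd=sda
vgname=gluster_vg_sdc
poolname=lvthinpool
cache_lv=lvcache
cache_lvsize=180GB
cachemode=writethrough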
Laura,

I find that the explanation for each option value is not clearly included in the document. Was it left out for some reason, or missed?

SSD_Device - SSD device; e.g. if /dev/sdc is the SSD drive, then ssd=sdc
vgname     - value of 'vgname' found under '[vg1]' of the generated gdeploy
             config file
poolname   - value of 'poolname' found under '[lv1]' of the generated gdeploy
             config file
cache_name - name for the cache_data logical volume
cache_size - size of the cache_data logical volume. Note to reserve 1/1000th
             of the space for the cache_meta LV. For example, if you have a
             1000GB SSD, then cache_size should be 999GB.
cache_mode - writethrough or writeback, based on workload. Writethrough is
             recommended for most workloads.
(In reply to SATHEESARAN from comment #28)
> Laura,
>
> I find that the explanation for each option value is not clearly included
> in the document. Was it left out for some reason, or missed?
>
> SSD_Device - SSD device; e.g. if /dev/sdc is the SSD drive, then ssd=sdc
> vgname     - value of 'vgname' found under '[vg1]' of the generated gdeploy
>              config file
> poolname   - value of 'poolname' found under '[lv1]' of the generated
>              gdeploy config file
> cache_name - name for the cache_data logical volume
> cache_size - size of the cache_data logical volume. Note to reserve
>              1/1000th of the space for the cache_meta LV. For example, if
>              you have a 1000GB SSD, then cache_size should be 999GB.
> cache_mode - writethrough or writeback, based on workload. Writethrough is
>              recommended for most workloads.

This has nothing to do with the documentation, but I thought of correcting a statement that I recorded in this bug earlier. 'cache_size' should simply be the size of the cache that is allocated; the description I gave above (reserving 1/1000th of the space) is for the cache_metadata size calculation. Note that in this case (gdeploy), we have left it to LVM to compute the cache_metadata size, based on the size of the cache.
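A minimal sketch of the corrected semantics, assuming the example values used earlier in this bug: with cache_lvsize=180GB, gdeploy allocates a 180GB cache data LV and LVM derives the cache_metadata LV size on its own. The lvs invocation below is a standard LVM reporting command for inspecting the result, shown here as an assumption about how one might verify it, not as gdeploy syntax.

# With cache_lvsize=180GB, the cache data LV is 180GB and LVM sizes the
# cache_metadata LV automatically. The hidden LVs can be inspected
# after deployment with:
#
#   lvs -a -o lv_name,lv_size gluster_vg_sdc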
Reviewed the content at the link provided in https://bugzilla.redhat.com/show_bug.cgi?id=1429734#c32, and it looks good to me. Moving this bug to the verified state.
Note that this bug is countermanded by Bug 1457072 and this content won't be included in the RHHI 1.0 docs.
Fixed in RHGS 3.3 documentation.