Document URL:
3.1 -> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html-single/Administration_Guide/index.html#chap-Red_Hat_Storage_Volumes-gdeploy
3.2 -> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html-single/administration_guide/#chap-Red_Hat_Storage_Volumes-gdeploy

Section Number and Name:
3.1 -> "6.1. Setting up Gluster Storage Volumes using gdeploy"
3.2 -> "5.1. Setting up Gluster Storage Volumes using gdeploy"

Describe the issue:
The documentation does not provide guidance on preventing thin pool metadata from reaching 100% utilization.

Suggestions for improvement:
1. Document guidance that makes customers aware that, when creating a thin pool, they should not set a small chunk size for a large thin pool (i.e. "If the thin pool is size <X>, set the chunk size to <Y>.").
2. Document additional guidance as determined by GSS (forthcoming).

Additional information:
Related to customer case #01824491 (https://access.redhat.com/support/cases/#/case/01824491)
@Anjana : Can we get this added to the 3.3 doc tracker bug?
(In reply to Bipin Kunal from comment #11)
> @Anjana : Can we get this added to the 3.3 doc tracker bug?

Added to the 3.3 tracker, Bipin.
*** Bug 1464220 has been marked as a duplicate of this bug. ***
Some experiments on a setup with 2 devices: a RAID-6 virtual drive of ~18TB and a single drive of 1.8TB.

sdc    8:32   0  18.2T  0 disk
sde    8:64   0   1.8T  0 disk

kernel and lvm versions:
kernel-3.10.0-862.el7.x86_64
lvm2-2.02.177-4.el7.x86_64

Are the defaults picked by LVM appropriate for gluster use, which includes snapshots?

1. On /dev/sde:

command: lvcreate --thinpool rhs_vg1/rhs_tp1 --extents 100%FREE --zero n
output: Thin pool volume with chunk size 1.00 MiB can address at most 253.00 TiB of data.

dmsetup table output:
rhs_vg1-rhs_tp1: 0 3905445888 thin-pool 253:0 253:1 2048 0 1 skip_block_zeroing

So, the chunk size selected for the 1.8T PV is 1MiB, which is larger than we would like. And with larger drives, say 6TB, the default chunk size picked by LVM would be unacceptably large.

2. Here's what happens when you specify a thin pool size of 6T on /dev/sdc:

command: lvcreate --thinpool rhs_vg1/rhs_tp1 --size 6T --zero n
output: Thin pool volume with chunk size 4.00 MiB can address at most 1012.00 TiB of data.

So in this case, the chunk size chosen is 4MiB, which is way too large for use with snapshots.

3. On creating a thin pool using all the space in the RAID-6 device /dev/sdc:

command: lvcreate --thinpool rhs_vg1/rhs_tp1 --extents 100%FREE --zero n
output: Thin pool volume with chunk size 16.00 MiB can address at most 3.95 PiB of data.

In summary, I don't think we can simply go with the LVM defaults.
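For reference, and purely as an illustration (the specific values below are placeholders, not a recommendation from this bz), the LVM defaults can be overridden by passing an explicit chunk size and pool metadata size at creation time:

command (hypothetical; 256 KiB chunks and 16 GiB pool metadata are example values only):
lvcreate --thinpool rhs_vg1/rhs_tp1 --extents 100%FREE \
         --chunksize 256k --poolmetadatasize 16G --zero n

The actual values to use would come from whatever guidance we finalize on this bz.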
Starting with:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/html-single/administration_guide/#Brick_Configuration

I'd suggest adding the following:

<quote>
Adjusting chunk size for very large devices

In very rare cases, where extremely large capacity devices are used, the thin pool chunk size chosen based on the above recommendation may prove to be too small. Use the calculation below to adjust the thin pool chunk size to ensure that the entire device can be used without running out of space on the thin pool metadata device.

Note that in this case, the poolmetadatasize should be set to the maximum, which is 16GiB.

addressable_size_in_tb = 15 * (recommended_chunk_size_in_kb / 64)

If addressable_size_in_tb is smaller than the device size, the chunk size should be adjusted as per the calculation below:

adjustment_factor = ceiling(device_size_in_tb / addressable_size_in_tb)
final_chunk_size_in_kb = recommended_chunk_size_in_kb * adjustment_factor
</quote>
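To make the calculation concrete, here is a worked example with assumed numbers (a recommended chunk size of 1280 KiB, a plausible RAID 6 full-stripe value, and a hypothetical 400 TiB device; neither figure comes from this bz):

addressable_size_in_tb = 15 * (1280 / 64) = 300
adjustment_factor = ceiling(400 / 300) = 2
final_chunk_size_in_kb = 1280 * 2 = 2560  (i.e. a 2.5 MiB chunk size)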
(In reply to Manoj Pillai from comment #36)
> Starting with:
> https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/
> html-single/administration_guide/#Brick_Configuration
> 
> I'd suggest adding the following:
> 
> <quote>
> Adjusting chunk size for very large devices
> 
> In very rare cases, where extremely large capacity devices are used,
> the thin pool chunk size chosen based on the above recommendation may
> prove to be too small. Use the calculation below to adjust the thin
> pool chunk size to ensure that the entire device can be used without
> running out of space on the thin pool metadata device.
> 
> Note that in this case, the poolmetadatasize should be set to the
> maximum, which is 16GiB.
> 
> addressable_size_in_tb = 15 * (recommended_chunk_size_in_kb / 64)
> 
> If addressable_size_in_tb is smaller than the device size, the chunk
> size should be adjusted as per the calculation below:
> 
> adjustment_factor = ceiling(device_size_in_tb / addressable_size_in_tb)
> final_chunk_size_in_kb = recommended_chunk_size_in_kb * adjustment_factor
> </quote>

Why is this in the documentation and not part of the deployment code?
(In reply to Yaniv Kaul from comment #37)
> (In reply to Manoj Pillai from comment #36)
> > Starting with:
> > https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/
> > html-single/administration_guide/#Brick_Configuration
> > 
> > I'd suggest adding the following:
> > 
> > <quote>
> > Adjusting chunk size for very large devices
[...]
> 
> Why is this in the documentation and not part of the deployment code?

The expectation is that it will be. The recommendations in this doc chapter are encoded in the deployment code.
(In reply to Manoj Pillai from comment #38)
> (In reply to Yaniv Kaul from comment #37)
> > (In reply to Manoj Pillai from comment #36)
> > > Starting with:
> > > https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.4/
> > > html-single/administration_guide/#Brick_Configuration
> > > 
> > > I'd suggest adding the following:
> > > 
> > > <quote>
> > > Adjusting chunk size for very large devices
> [...]
> > 
> > Why is this in the documentation and not part of the deployment code?
> 
> The expectation is that it will be. The recommendations in this doc chapter
> are encoded in the deployment code.

I will raise a bug to incorporate these recommendations in gluster-ansible while setting up the backend.
(In reply to Sachidananda Urs from comment #40)
> 
> I will raise a bug to incorporate these recommendations in gluster-ansible
> while setting up the backend.

I'd like to give folks on this bz a chance to comment before finalizing it as a recommendation.
No changes requested to what I proposed in comment #36. Laura, anything else you need from me on this bz?