Bug 1411259 - While creating LV thinpool with RAID6 disktype, chunksize is computed incorrectly
Summary: While creating LV thinpool with RAID6 disktype, chunksize is computed incorrectly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.2.0
Assignee: Sachidananda Urs
QA Contact: SATHEESARAN
URL:
Whiteboard:
Depends On:
Blocks: Gluster-HC-2 1351528
 
Reported: 2017-01-09 09:39 UTC by SATHEESARAN
Modified: 2020-09-10 10:06 UTC
CC List: 6 users

Fixed In Version: gdeploy-2.0.1-11
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-03-23 05:09:22 UTC
Embargoed:


Attachments: None


Links
System ID: Red Hat Product Errata RHEA-2017:0482
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Gluster Storage 3.2.0 gdeploy bug fix and enhancement update
Last Updated: 2017-03-23 09:06:28 UTC

Description SATHEESARAN 2017-01-09 09:39:59 UTC
Description of problem:
-----------------------
With the RAID6 disktype, when diskcount and stripe-size are provided, the chunksize for LV thinpool creation is computed incorrectly.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gdeploy-2.0.1-8.el7rhgs

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a config file as follows:

[disktype]
raid6
[diskcount]
4
[stripesize]
256
[pv]
action=create
devices=vdb

[vg1]
action=create
vgname=RHGS_vg1
pvname=vdb

[lv]
action=create
vgname=RHGS_vg1
poolname=lvthinpool
lvtype=thinpool
poolmetadatasize=1MB
size=20GB

2. Check the chunksize of the LV thinpool.

Actual results:
---------------
'chunksize' is not the expected value.
It should be the product of the stripe-size and the number of data disks.

Expected results:
------------------
'chunksize' for LV thinpool creation should be the product of the stripe-size and the number of data disks.
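
For reference, a minimal sketch of the expected computation, assuming stripesize is in KB and diskcount counts only the data disks (as in the config above); the function name is hypothetical, not gdeploy's actual code:

def expected_chunksize_kb(stripesize_kb, data_disk_count):
    # Expected thinpool chunksize: stripe size times the number of data disks.
    return stripesize_kb * data_disk_count

# With the config above (stripesize=256, diskcount=4):
print(expected_chunksize_kb(256, 4))  # 1024, i.e. a 1024k chunksize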

Comment 3 SATHEESARAN 2017-01-09 09:43:38 UTC
This bug is a blocker and needs to be fixed, as the Grafton GA installation relies on the calculation of the correct chunksize for the optimal configuration recommended in the RHGS Admin Guide [1].

[1] - https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html-single/Administration_Guide/index.html#Hardware_RAID

This would also affect other RHGS installations, as the creation of the LV thinpool will use a wrong value for the 'chunksize' attribute.

Comment 4 Sachidananda Urs 2017-01-09 09:48:50 UTC
https://github.com/gluster/gdeploy/commit/2455587e365c5e7 should resolve the issue. This bug was fixed upstream as part of the fix for: https://github.com/gluster/gdeploy/issues/251

Comment 5 surabhi 2017-01-09 11:33:42 UTC
As discussed in the blocker triage meeting, providing qa_ack.

Comment 7 SATHEESARAN 2017-02-06 10:50:05 UTC
Tested with gdeploy-2.0.1-9.el7rhgs and it failed verification, as the chunk size always remains at 1MB, irrespective of changes in the number of disks.

The following config file was used:
[root@dhcp37-196 ~]# cat hc.conf 
# A sample configuration file to set up ROBO

[hosts]
host1.lab.eng.blr.redhat.com

[disktype]
raid6

[diskcount]
8

[stripesize]
256

[pv]
action=create
devices=vdb

[vg1]
action=create
vgname=RHGS_vg1
pvname=vdb

[lv1]
action=create
vgname=RHGS_vg1
lvname=engine_lv
lvtype=thick
size=10GB
mount=/rhgs/brick1

[lv2]
action=create
vgname=RHGS_vg1
poolname=lvthinpool
lvtype=thinpool
poolmetadatasize=10MB
chunksize=1024k
size=30GB

[lv3]
action=create
lvname=lv_vmaddldisks
poolname=lvthinpool
vgname=RHGS_vg1
lvtype=thinlv
mount=/rhgs/brick2
virtualsize=9GB

Here is the chunksize as reported by lvs:
# lvs -o chunksize /dev/mapper/RHGS_vg1-lvthinpool
  Chunk
  1.00m

Comment 8 SATHEESARAN 2017-02-06 11:24:17 UTC
I made a mistake: the chunksize was hardcoded in the config file, and that made the chunksize remain the same for different disk counts. I just noticed this issue.

Let me retry verification with this hardcoded value removed.

Comment 9 SATHEESARAN 2017-02-06 11:43:11 UTC
Tested again with gdeploy-2.0.1-9.el7rhgs and found that the chunksize is a constant 192k for different diskcount values. Hence this bug failed QA verification.

Below is the config file used:
[hosts]
host.example.com

[disktype]
raid6

[diskcount]
4

[stripesize]
256

[pv]
action=create
devices=vdb

[vg1]
action=create
vgname=RHGS_vg1
pvname=vdb

[lv1]
action=create
vgname=RHGS_vg1
lvname=engine_lv
lvtype=thick
size=10GB
mount=/rhgs/brick1

[lv2]
action=create
vgname=RHGS_vg1
poolname=lvthinpool
lvtype=thinpool
poolmetadatasize=10MB
size=30GB

[lv3]
action=create
lvname=lv_vmaddldisks
poolname=lvthinpool
vgname=RHGS_vg1
lvtype=thinlv
mount=/rhgs/brick2
virtualsize=9GB

# lvs -o chunksize /dev/mapper/RHGS_vg1-lvthinpool
  Chunk  
  192.00k

Comment 10 Atin Mukherjee 2017-02-06 12:09:58 UTC
I checked the code [1] and, as per the logic implemented, it should return 256 * 4, i.e. 1024k; not sure how we are arriving at 192k.

I've asked Devyani to look into it.

[1] https://github.com/gluster/gdeploy/pull/260/files

Comment 11 Sachidananda Urs 2017-02-12 03:47:07 UTC
sas, the cause of this bug is:

We used to compute the chunksize in the lv module, but the required parameters were not accessible to modules, so it used to default to a value chosen by LVM.

The chunksize computation has now been moved out of the module into the lv feature, and the chunksize is passed as a parameter to the module.

The changes are committed: https://github.com/gluster/gdeploy/commit/043d8448dc

I tested with various values:

0. Specifying the chunksize= variable within the lv section.
1. Specifying diskcount and stripesize.
2. Leaving it to the default (i.e. not providing either in the config).

[root@rhgs2 ~]# lvs -o chunk_size /dev/mapper/RHGS_vg1-lvthinpool
  Chunk
  4.00m
[root@rhgs2 ~]# lvs -o chunk_size /dev/mapper/RHGS_vg1-lvthinpool
  Chunk 
  40.00m
[root@rhgs2 ~]# lvs -o chunk_size /dev/mapper/RHGS_vg1-lvthinpool                                                                                                                                                  
  Chunk 
  64.00k

Results look good, request more testing.
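
To restate the selection order the three tests above exercise, a minimal sketch (the helper and its names are hypothetical, not gdeploy's actual code; when nothing is supplied, --chunksize is simply omitted and LVM picks its own default):

def resolve_chunksize(lv_section, global_options):
    # 0. An explicit chunksize= in the lv section wins.
    if "chunksize" in lv_section:
        return lv_section["chunksize"]
    # 1. Otherwise compute it from stripesize (KB) and diskcount.
    if "stripesize" in global_options and "diskcount" in global_options:
        return "%dk" % (int(global_options["stripesize"]) * int(global_options["diskcount"]))
    # 2. Otherwise return None: omit --chunksize and let LVM decide.
    return None

print(resolve_chunksize({}, {"stripesize": "256", "diskcount": "4"}))  # 1024k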

Comment 12 SATHEESARAN 2017-02-14 07:09:37 UTC
(In reply to Sachidananda Urs from comment #11)
> sas, the cause of this bug is:
> 
> We used to compute the chunksize in the lv module, but the required
> parameters were not accessible to modules, so it used to default to a
> value chosen by LVM.
> 
> The chunksize computation has now been moved out of the module into the
> lv feature, and the chunksize is passed as a parameter to the module.
> 
> The changes are committed:
> https://github.com/gluster/gdeploy/commit/043d8448dc
> 
> I tested with various values:
> 
> 0. Specifying the chunksize= variable within the lv section.
> 1. Specifying diskcount and stripesize.
> 2. Leaving it to the default (i.e. not providing either in the config).
> 
> [root@rhgs2 ~]# lvs -o chunk_size /dev/mapper/RHGS_vg1-lvthinpool
>   Chunk
>   4.00m
> [root@rhgs2 ~]# lvs -o chunk_size /dev/mapper/RHGS_vg1-lvthinpool
>   Chunk
>   40.00m
> [root@rhgs2 ~]# lvs -o chunk_size /dev/mapper/RHGS_vg1-lvthinpool
>   Chunk
>   64.00k
> 
> Results look good, request more testing.

Thanks, Sac, for the information.
I have verified with other tests too (with the scratch build 2.0.1-10 from Sac) and all looks good.

Comment 13 SATHEESARAN 2017-02-17 16:29:56 UTC
Tested with gdeploy-2.0.1-11.el7rhgs and found that the chunksize is set properly.

With diskcount set to '10' and stripesize to '256', the chunksize is correctly set to 2.50MB (256KB * 10 = 2560KB = 2.5MB).

Marking the bug as VERIFIED

Comment 15 errata-xmlrpc 2017-03-23 05:09:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0482.html

