Bug 1342519 - Need more volume set options in sample hc.conf.
Summary: Need more volume set options in sample hc.conf.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3 Async
Assignee: Devyani Kota
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks: Gluster-HC-2 1351522
 
Reported: 2016-06-03 12:14 UTC by RamaKasturi
Modified: 2021-09-09 13:03 UTC
9 users

Fixed In Version: gdeploy-2.0.1-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-07 11:33:19 UTC
Embargoed:




Links
System ID: Red Hat Product Errata RHSA-2017:0260
Priority: normal
Status: SHIPPED_LIVE
Summary: Important: ansible and gdeploy security and bug fix update
Last Updated: 2017-02-07 16:32:47 UTC

Description RamaKasturi 2016-06-03 12:14:02 UTC
Description of problem:
More volume set options need to be present in the sample hc.conf. The options are:
locking-scheme=granular, shd-max-threads=<number of threads>, and cluster.shd-wait-qlength=<length>
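
For illustration, this is roughly how those options could be appended to the key= and value= lists of a gdeploy [volume] section, using the fully qualified key names that comment 9 later settles on. This is only a sketch: the volume name is a hypothetical placeholder, and the thread is still waiting on recommended values for the two self-heal settings.

[volume]
action=create
volname=engine
# volname above is hypothetical; <number of threads> and <length> still need recommended values
key=cluster.locking-scheme,cluster.shd-max-threads,cluster.shd-wait-qlength
value=granular,<number of threads>,<length>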

Version-Release number of selected component (if applicable):
gdeploy-2.0-16.el7rhgs.noarch

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
As of now, the options listed in the description are not present in the sample hc.conf file. We need these options set on a volume for HC.

Expected results:
The options listed in the description should be present in the sample hc.conf.

Additional info:

Comment 2 RamaKasturi 2016-06-03 12:16:31 UTC
Waiting for the recommended numbers for shd-max-threads and cluster.shd-wait-qlength from the performance team.

Sahina, do you have any info on these numbers?

Comment 3 Sahina Bose 2016-06-08 08:40:36 UTC
Nope, I don't have the updates.

Ben, could you provide the recommendation for the VM store use case in HC mode?

Comment 4 SATHEESARAN 2016-06-16 08:08:35 UTC
Apart from the options mentioned in comment 0, the following options should also be set on all the volumes:

   key                          value
network.ping-timeout             30
user.cifs                        off
nfs.disable                      enable

Please update the gdeploy conf file hc.conf to reflect the same.
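
A minimal sketch of how these additions could look when appended to the key= and value= lists in the sample hc.conf, assuming the [volume] section layout of the shipped example file; the volume name is a hypothetical placeholder. (Comment 9 later corrects the nfs.disable value to "on".)

[volume]
action=create
volname=data
# hypothetical volume name for the sketch
key=group,network.ping-timeout,user.cifs,nfs.disable
value=virt,30,off,enable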

Comment 5 SATHEESARAN 2016-06-16 08:14:02 UTC
(In reply to SATHEESARAN from comment #4)
> Apart from the options mentioned in comment 0, the following options should
> also be set on all the volumes:
> 
>    key                          value
> network.ping-timeout             30
> user.cifs                        off
> nfs.disable                      enable
> 
> Please update the gdeploy conf file hc.conf to reflect the same.

There is a separate bug [1] raised to update the network.ping-timeout value in the HCI standard template.

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1346244

Comment 6 Paul Cuzner 2016-07-13 22:06:47 UTC
For reference, the testing I have done in BAGL has used the following

cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6

With these settings, self-heal ran at around 250 MB/s within its systemd slice of 600%.

At 6 threads the rate of improvement started to decline, so that's probably where I'd pin the default.

Interested to see what Ben's tests show.
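
Expressed in the hc.conf key=/value= style, that tested tuning would look roughly like the two lines below (only the self-heal keys are shown; the default actually shipped may differ):

key=cluster.shd-wait-qlength,cluster.shd-max-threads
value=10000,6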

Comment 8 Devyani Kota 2016-08-11 14:15:29 UTC
Hey, I have been assigned this bug.
Here is the PR: https://github.com/gluster/gdeploy/pull/131
It fixes this issue and also RHBZ #1346244.

Change Log:
Added more volume set options to the hc.conf file:
    key                        value
cluster.shd-wait-qlength       10000
cluster.shd-max-threads        6
network.ping-timeout           30
user.cifs                      off
nfs.disable                    enable

Can someone review this? Thanks.

Comment 9 RamaKasturi 2016-08-16 07:44:40 UTC
Hi Devyani,

    shd-max-threads is already present in the hc.conf file; the key has to be changed to "cluster.shd-max-threads", and the nfs.disable value has to be "on". The others look good to me.

    There are some more options which need to be changed or added. Could you please add them?

1) key "locking-scheme" has to be changed to "cluster.locking-scheme"
2) Add a new key called performance.strict-o-direct and set its value to on
3) Add a new key called network.remote-dio and set its value to off
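
Putting these corrections together with the earlier additions, the volume options in the sample hc.conf would look roughly like the sketch below. The key order and the shd-max-threads value of 6 (from the earlier PR) are assumptions here; compare with the verified file quoted in comment 11.

key=group,cluster.locking-scheme,cluster.shd-max-threads,cluster.shd-wait-qlength,network.ping-timeout,user.cifs,nfs.disable,performance.strict-o-direct,network.remote-dio
value=virt,granular,6,10000,30,off,on,on,off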

Comment 10 Devyani Kota 2016-08-19 09:04:29 UTC
Hi,
Here is the PR: https://github.com/gluster/gdeploy/pull/136
This solves the issue.
Hence, changing the status of the issue.
Thanks.

Comment 11 RamaKasturi 2016-09-30 08:58:04 UTC
Verified and works fine with build gdeploy-2.0.1-2.el7rhgs.noarch.

I see that the following options and values are present in /usr/share/doc/gdeploy/examples/hc.conf:

key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,nfs.disable,performance.strict-o-direct,network.remote-dio
value=virt,36,36,on,512MB,32,full,granular,10000,8,30,off,on,on,off

As per the recommendation in comment 6, shd-max-threads is set to 6 in the hc.conf file, and the same has been verified.

Comment 13 errata-xmlrpc 2017-02-07 11:33:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0260.html

