Description of problem:
Need more volume set options to be present in hc.conf. The following are the options:

locking-scheme=granular
shd-max-threads=<number of threads>
cluster.shd-wait-qlength=<length>

Version-Release number of selected component (if applicable):
gdeploy-2.0-16.el7rhgs.noarch

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
As of now, the options listed in the description are not present in the hc sample conf file. We would need these options on a volume for HC.

Expected results:
The options listed in the description should be present in the sample hc.conf.

Additional info:
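In gdeploy's hc.conf syntax (the key=/value= comma-list form used elsewhere in this bug), the requested options would land in a [volume] section roughly like the sketch below. The section header, action, and volume name are illustrative assumptions, and the thread/queue values are left as placeholders pending the performance team's recommendation:

```
[volume]
action=create
volname=engine
key=cluster.locking-scheme,cluster.shd-max-threads,cluster.shd-wait-qlength
value=granular,<number of threads>,<length>
```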
Waiting for the recommended numbers for shd-max-threads and cluster.shd-wait-qlength from the performance team. Sahina, do you have any info on these numbers?
Nope, I don't have the updates. Ben, could you provide the recommendation for the VM store use case in HC mode?
Apart from the options mentioned in comment 0, the following options are to be set on all the volumes:

key                   value
network.ping-timeout  30
user.cifs             off
nfs.disable           enable

Please update the gdeploy conf file hc.conf to reflect the same.
(In reply to SATHEESARAN from comment #4)
> Apart from the options mentioned in comment 0, the following options are to
> be set on all the volumes.
>
> key                   value
> network.ping-timeout  30
> user.cifs             off
> nfs.disable           enable
>
> Please update the gdeploy conf file hc.conf to reflect the same

There is a separate bug [1] raised to update the network.ping-timeout value in the HCI standard template.

[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1346244
For reference, the testing I have done in BAGL has used the following:

cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 6

With these settings, self-heal ran at around 250MB/s within its systemd slice of 600%. At 6 threads the rate of improvement started to decline, so that's probably where I'd pin the default. Interested to see what Ben's tests show.
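For a volume that already exists, these tested values could also be applied directly with the gluster CLI. The volume name below is a placeholder; this is only a sketch of the equivalent commands, not part of the hc.conf change itself:

```
gluster volume set <volname> cluster.shd-wait-qlength 10000
gluster volume set <volname> cluster.shd-max-threads 6
```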
Hey,

I have been assigned this bug. Here is the PR which fixes this issue, and also RHBZ #1346244: https://github.com/gluster/gdeploy/pull/131

Change log: added more volume set options in the hc.conf file.

key                       value
cluster.shd-wait-qlength  10000
cluster.shd-max-threads   6
network.ping-timeout      30
user.cifs                 off
nfs.disable               enable

Can someone review this? Thanks.
Hi Devayani,

shd-max-threads is already present in the hc.conf file, and its key has to be changed to "cluster.shd-max-threads"; also, the nfs.disable value has to be "on". The others look good to me.

There are some more options which need to be changed or added. Could you please add them:

1) The key "locking-scheme" has to be changed to "cluster.locking-scheme"
2) Add a new key called performance.strict-o-direct and set its value to on
3) Add a new key called network.remote-dio and set its value to off
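Taken together with the numbers from comment 6, the affected portion of the hc.conf key/value lists would end up roughly as follows (other keys in the file omitted; this is a sketch, not the exact file contents):

```
key=cluster.locking-scheme,cluster.shd-max-threads,performance.strict-o-direct,network.remote-dio,nfs.disable
value=granular,6,on,off,on
```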
Hi,

Here is the PR which solves this issue: https://github.com/gluster/gdeploy/pull/136

Hence, changing the status of the issue. Thanks.
Verified and works fine with build gdeploy-2.0.1-2.el7rhgs.noarch.

I see that the following options and values are present in /usr/share/doc/gdeploy/examples/hc.conf:

key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,nfs.disable,performance.strict-o-direct,network.remote-dio
value=virt,36,36,on,512MB,32,full,granular,10000,8,30,off,on,on,off

As per the recommendation in comment 6, shd-max-threads is set to 6 in the hc.conf file, and verified the same.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0260.html