Bug 1356291

Summary: [RFE] Add directio settings for the volumes defined in hyperconverged configurations
Product: Red Hat Gluster Storage
Reporter: Paul Cuzner <pcuzner>
Component: gdeploy
Assignee: Sachidananda Urs <surs>
Status: CLOSED ERRATA
QA Contact: SATHEESARAN <sasundar>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: amukherj, rcyriac, sasundar, smohan
Target Milestone: ---
Keywords: FutureFeature
Target Release: RHGS 3.2.0
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: gdeploy-2.0.1-1
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-23 04:57:31 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On:
Bug Blocks: 1351503

Description Paul Cuzner 2016-07-13 22:16:23 UTC
Description of problem:
Tests have shown an I/O performance benefit when direct I/O options are enabled. This RFE requests adding the missing direct I/O options to all volumes defined within a hyperconverged (HC) configuration.

Version-Release number of selected component (if applicable):
RHGS 3.1.3


Please add the following volume settings:
network.remote-dio: disable
performance.strict-o-direct: on
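
A minimal sketch of how these options could be carried in a gdeploy [volume] section, following the key/value style that gdeploy's example configs use. The volume name and the other keys shown are illustrative, not part of this request:

```ini
# Hypothetical gdeploy volume section; only the last two key/value
# pairs are the options requested by this RFE.
[volume]
action=create
volname=engine
# Comma-separated option names and their values, matched by position:
key=group,performance.strict-o-direct,network.remote-dio
value=virt,on,off
```

Note that on the gluster side network.remote-dio takes on/off values, so "disable" in the request corresponds to value=off in the config.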


Additional info:

Comment 5 Sachidananda Urs 2016-08-24 11:53:16 UTC
Commit: https://github.com/gluster/gdeploy/commit/99d441755 fixes the issue.

Comment 6 SATHEESARAN 2016-10-25 08:06:42 UTC
Tested with gdeploy-2.0.1-2.el7rhgs

remote-dio is disabled and strict-o-direct is enabled on all the volumes:

# grep remote-dio /usr/share/doc/gdeploy/examples/hc.conf -A1
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,nfs.disable,performance.strict-o-direct,network.remote-dio
value=virt,36,36,on,512MB,32,full,granular,10000,6,30,off,on,on,off
--
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,nfs.disable,performance.strict-o-direct,network.remote-dio
value=virt,36,36,on,512MB,32,full,granular,10000,6,30,off,on,on,off
--
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,nfs.disable,performance.strict-o-direct,network.remote-dio
value=virt,36,36,on,512MB,32,full,granular,10000,6,30,off,on,on,off

Comment 8 errata-xmlrpc 2017-03-23 04:57:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2017-0483.html