Bug 1346244 - HCI gdeploy answer file should set the value network.ping-timeout to 30 for all the volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: gdeploy
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3 Async
Assignee: Devyani Kota
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks: Gluster-HC-2 1351522
 
Reported: 2016-06-14 11:07 UTC by SATHEESARAN
Modified: 2017-03-07 17:43 UTC
CC: 7 users

Fixed In Version: gdeploy-2.0.1-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-02-07 11:33:41 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:0260 0 normal SHIPPED_LIVE Important: ansible and gdeploy security and bug fix update 2017-02-07 16:32:47 UTC

Description SATHEESARAN 2016-06-14 11:07:05 UTC
Description of problem:
-----------------------
gdeploy doesn't set any value for the network.ping-timeout option on the volumes
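
(For illustration only, not part of the original report: until the template is fixed, the option can be applied by hand with the regular gluster CLI. The volume name "engine" below is just an example.)

# apply the option manually on one volume (example volume name)
gluster volume set engine network.ping-timeout 30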

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gdeploy-2.0-16

How reproducible:
-----------------
NA

Actual results:
---------------
The standard template - hc.conf - does not include network.ping-timeout in its volume options

Expected results:
-----------------
The volume options list should include network.ping-timeout with a value of 30 seconds
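
(Illustrative sketch only, not the shipped template: in the gdeploy answer file the option would ride along in the [volume] section's key=/value= lists, roughly as in the trimmed-down example below. The volume name and the shortened option list are assumptions; the full option list that actually shipped is quoted in Comment 4.)

[volume]
action=create
volname=engine
# network.ping-timeout added alongside the other virt-group options
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout
value=virt,36,36,30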

Comment 2 SATHEESARAN 2016-06-14 11:09:04 UTC
Thanks, Kasturi, for noticing that gdeploy doesn't set network.ping-timeout to 30.

Comment 4 RamaKasturi 2016-09-30 09:12:17 UTC
Verified with build gdeploy-2.0.1-2.el7rhgs.noarch.

The standard template hc.conf contains network.ping-timeout, and its value is set to 30.

key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,nfs.disable,performance.strict-o-direct,network.remote-dio
value=virt,36,36,on,512MB,32,full,granular,10000,6,30,off,on,on,off


[root@rhsqa ~]# gluster volume info engine
 
Volume Name: engine
Type: Replicate
Volume ID: edffebf5-e45f-41e5-9c76-a8013111d584
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.82:/rhgs/brick1/engine
Brick2: 10.70.36.83:/rhgs/brick1/engine
Brick3: 10.70.36.84:/rhgs/brick1/engine
Options Reconfigured:
performance.strict-o-direct: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on


[root@rhsqa ~]# gluster volume info data
 
Volume Name: data
Type: Replicate
Volume ID: 7b3ec65a-7042-47c0-941f-a599f3846b85
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.82:/rhgs/brick2/data
Brick2: 10.70.36.83:/rhgs/brick2/data
Brick3: 10.70.36.84:/rhgs/brick2/data
Options Reconfigured:
performance.strict-o-direct: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on


[root@rhsqa ~]# gluster volume info vmstore
 
Volume Name: vmstore
Type: Replicate
Volume ID: 97341ffa-c9dd-4685-be82-dd963cecd0af
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.82:/rhgs/brick3/vmstore
Brick2: 10.70.36.83:/rhgs/brick3/vmstore
Brick3: 10.70.36.84:/rhgs/brick3/vmstore
Options Reconfigured:
performance.strict-o-direct: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
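
(Additional spot check, not part of the original verification output: if the installed glusterfs supports it, the single option can also be queried per volume with gluster volume get instead of reading the full volume info.)

# query only the ping-timeout option for each volume
gluster volume get engine network.ping-timeout
gluster volume get data network.ping-timeout
gluster volume get vmstore network.ping-timeout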

Comment 6 errata-xmlrpc 2017-02-07 11:33:41 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0260.html

