Description of problem:
-----------------------
gdeploy doesn't set any value for the ping-timeout option on the volumes.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
gdeploy-2.0-16

How reproducible:
-----------------
NA

Actual results:
---------------
The standard template - hc.conf - does not contain a value for network.ping-timeout.

Expected results:
-----------------
The volume options list should include network.ping-timeout with a value of 30 seconds.
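For reference, a minimal sketch of how a fixed [volume] section in hc.conf could carry the option. Only the key/value lists are taken from the fixed template quoted in the verification comment below; the section header, action and volname lines are illustrative and may differ from the actual shipped template:

# Illustrative gdeploy [volume] section; only the key/value lists come from
# the verified hc.conf, the rest is a placeholder.
[volume]
action=create
volname=engine
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,nfs.disable,performance.strict-o-direct,network.remote-dio
value=virt,36,36,on,512MB,32,full,granular,10000,6,30,off,on,on,off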
Thanks Kasturi for noticing that gdeploy doesn't set network.ping-timeout to 30.
Verified with build gdeploy-2.0.1-2.el7rhgs.noarch. The standard template hc.conf now contains network.ping-timeout, set to 30:

key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,nfs.disable,performance.strict-o-direct,network.remote-dio
value=virt,36,36,on,512MB,32,full,granular,10000,6,30,off,on,on,off

[root@rhsqa ~]# gluster volume info engine

Volume Name: engine
Type: Replicate
Volume ID: edffebf5-e45f-41e5-9c76-a8013111d584
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.82:/rhgs/brick1/engine
Brick2: 10.70.36.83:/rhgs/brick1/engine
Brick3: 10.70.36.84:/rhgs/brick1/engine
Options Reconfigured:
performance.strict-o-direct: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

[root@rhsqa ~]# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: 7b3ec65a-7042-47c0-941f-a599f3846b85
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.82:/rhgs/brick2/data
Brick2: 10.70.36.83:/rhgs/brick2/data
Brick3: 10.70.36.84:/rhgs/brick2/data
Options Reconfigured:
performance.strict-o-direct: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

[root@rhsqa ~]# gluster volume info vmstore

Volume Name: vmstore
Type: Replicate
Volume ID: 97341ffa-c9dd-4685-be82-dd963cecd0af
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.82:/rhgs/brick3/vmstore
Brick2: 10.70.36.83:/rhgs/brick3/vmstore
Brick3: 10.70.36.84:/rhgs/brick3/vmstore
Options Reconfigured:
performance.strict-o-direct: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
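As a quicker spot-check than reading the full volume info, the option can also be queried directly with the standard gluster CLI (illustrative commands, not part of the original verification steps):

# Query only the ping-timeout option on each volume; each command should
# report network.ping-timeout as 30.
gluster volume get engine network.ping-timeout
gluster volume get data network.ping-timeout
gluster volume get vmstore network.ping-timeout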
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2017-0260.html