Bug 1346244
Summary: | HCI gdeploy answer file should set the value network.ping-timeout to 30 for all the volumes | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | SATHEESARAN <sasundar>
Component: | gdeploy | Assignee: | Devyani Kota <dkota>
Status: | CLOSED ERRATA | QA Contact: | RamaKasturi <knarra>
Severity: | high | Docs Contact: |
Priority: | unspecified | |
Version: | rhgs-3.1 | CC: | amukherj, knarra, rcyriac, rhinduja, sabose, smohan, surs
Target Milestone: | --- | Keywords: | ZStream
Target Release: | RHGS 3.1.3 Async | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | gdeploy-2.0.1-1 | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2017-02-07 11:33:41 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1277939, 1351522 | |
Description
SATHEESARAN 2016-06-14 11:07:05 UTC
Thanks Kasturi for noticing that gdeploy doesn't set network.ping-timeout to 30.

Verified with build gdeploy-2.0.1-2.el7rhgs.noarch. The standard template hc.conf contains network.ping-timeout, and its value is set to 30:

```
key=group,storage.owner-uid,storage.owner-gid,features.shard,features.shard-block-size,performance.low-prio-threads,cluster.data-self-heal-algorithm,cluster.locking-scheme,cluster.shd-wait-qlength,cluster.shd-max-threads,network.ping-timeout,user.cifs,nfs.disable,performance.strict-o-direct,network.remote-dio
value=virt,36,36,on,512MB,32,full,granular,10000,6,30,off,on,on,off
```
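For context, those comma-separated key/value lists are consumed by a gdeploy [volume] section at create time. A minimal sketch, assuming the usual gdeploy answer-file layout; the volume name, brick path, and the trimmed option list here are illustrative, not copied from the shipped hc.conf:

```
# Illustrative gdeploy answer-file excerpt (not the shipped hc.conf).
# gdeploy pairs each entry in "key" with the entry at the same
# position in "value" and applies them as volume options after
# the volume is created.
[volume]
action=create
volname=engine
replica=yes
replica_count=3
key=group,storage.owner-uid,storage.owner-gid,network.ping-timeout
value=virt,36,36,30
brick_dirs=/rhgs/brick1/engine
```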
All three volumes created by gdeploy show the option applied:

```
[root@rhsqa ~]# gluster volume info engine

Volume Name: engine
Type: Replicate
Volume ID: edffebf5-e45f-41e5-9c76-a8013111d584
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.82:/rhgs/brick1/engine
Brick2: 10.70.36.83:/rhgs/brick1/engine
Brick3: 10.70.36.84:/rhgs/brick1/engine
Options Reconfigured:
performance.strict-o-direct: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

[root@rhsqa ~]# gluster volume info data

Volume Name: data
Type: Replicate
Volume ID: 7b3ec65a-7042-47c0-941f-a599f3846b85
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.82:/rhgs/brick2/data
Brick2: 10.70.36.83:/rhgs/brick2/data
Brick3: 10.70.36.84:/rhgs/brick2/data
Options Reconfigured:
performance.strict-o-direct: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

[root@rhsqa ~]# gluster volume info vmstore

Volume Name: vmstore
Type: Replicate
Volume ID: 97341ffa-c9dd-4685-be82-dd963cecd0af
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.70.36.82:/rhgs/brick3/vmstore
Brick2: 10.70.36.83:/rhgs/brick3/vmstore
Brick3: 10.70.36.84:/rhgs/brick3/vmstore
Options Reconfigured:
performance.strict-o-direct: on
user.cifs: off
network.ping-timeout: 30
cluster.shd-max-threads: 8
cluster.shd-wait-qlength: 10000
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
performance.low-prio-threads: 32
features.shard-block-size: 512MB
features.shard: on
storage.owner-gid: 36
storage.owner-uid: 36
cluster.server-quorum-type: server
cluster.quorum-type: auto
network.remote-dio: off
cluster.eager-lock: enable
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0260.html
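For anyone spot-checking an existing deployment after the update, the option can be read back, and set by hand on volumes created before the fix, with the standard gluster CLI; the volume name engine below is just an example:

```
# Read the current value of the option on one volume
gluster volume get engine network.ping-timeout

# Apply the expected value manually if a volume predates the fix
gluster volume set engine network.ping-timeout 30
```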