Bug 1449867

Summary: [GSS] glusterd fails to start
Product: Red Hat Gluster Storage
Component: glusterd
Status: CLOSED ERRATA
Severity: high
Priority: high
Version: rhgs-3.2
Target Milestone: ---
Target Release: RHGS 3.4.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: rebase
Fixed In Version: glusterfs-3.12.2-1
Reporter: Oonkwee Lim_ <olim>
Assignee: Atin Mukherjee <amukherj>
QA Contact: Bala Konda Reddy M <bmekala>
Docs Contact:
CC: amukherj, nchilaka, olim, rhinduja, rhs-bugs, rmetrich, sheggodu, srmukher, storage-qa-internal, vbellur
Keywords: ZStream
Doc Type: Bug Fix
Doc Text:
Previously, if after a node reboot the network interface took longer to come up than the glusterd service, glusterd could fail to resolve the brick addresses belonging to other peers, causing the glusterd service to fail to start. With this fix, glusterd starts successfully even while the network interface is still coming up.
Story Points: ---
Clone Of:
: 1472267 1482844 1482857 (view as bug list)
Environment:
Last Closed: 2018-09-04 06:32:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Bug Depends On: 1472267, 1482835
Bug Blocks: 1472361, 1482844, 1482857, 1503135

Comment 14 Samikshan Bairagya 2017-07-26 10:40:26 UTC
Upstream patch: https://review.gluster.org/#/c/17813/

Comment 18 Bala Konda Reddy M 2018-05-03 12:07:37 UTC
Build: 3.12.2-8

Stopped glusterd and brought down the machine's NIC (ifdown <interface>).
Then started glusterd; it came up without any issues even with the interface down.
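The steps above can be sketched as the following command sequence. This is an illustrative outline only, assuming a systemd-managed RHGS node with root access; `eth0` is a placeholder interface name, and the commands require a live node with glusterfs-server installed:

```shell
# Stop glusterd, then bring the node's network interface down
systemctl stop glusterd
ifdown eth0                  # placeholder interface name

# Start glusterd while the interface is still down; with the fix,
# the daemon comes up even though peer brick addresses cannot
# yet be resolved
systemctl start glusterd
systemctl status glusterd    # expect: active (running)

# Restore networking afterwards
ifup eth0
```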

Marking as Verified per the fix referenced in comment 14.

Comment 20 errata-xmlrpc 2018-09-04 06:32:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.