Bug 1556670

Summary: glusterd fails to attach brick during restart of the node
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Atin Mukherjee <amukherj>
Component: glusterd
Assignee: Atin Mukherjee <amukherj>
Status: CLOSED ERRATA
QA Contact: Bala Konda Reddy M <bmekala>
Severity: high
Docs Contact:
Priority: urgent
Version: rhgs-3.3
CC: amukherj, asriram, chpai, mchangir, nchilaka, olim, rcyriac, rhinduja, rhs-bugs, rmadaka, sheggodu, storage-qa-internal, vbellur
Target Milestone: ---
Keywords: ZStream
Target Release: RHGS 3.3.1 Async
Hardware: Unspecified
OS: Unspecified
Whiteboard: brick-multiplexing
Fixed In Version: glusterfs-3.8.4-54.3
Doc Type: Bug Fix
Doc Text:
Previously, on a cluster with brick multiplexing enabled, if the gluster instance on one of the nodes went down while volume operations were underway on other nodes, restarting that gluster instance did not bring the bricks back up. With this fix, the bricks come up successfully in this scenario.
Story Points: ---
Clone Of: 1540600
Environment:
Last Closed: 2018-03-26 05:39:25 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1540607
Bug Blocks:

Comment 6 Rajesh Madaka 2018-03-23 05:57:52 UTC
Verified in glusterfs-3.8.4-54.3

Scenario tried:

1. Create a 3-node cluster, enable brick multiplexing, then create and start 20 1x3 volumes.
2. Bring down glusterd on the first node and perform a volume set operation on all 20 volumes from one of the other nodes.
3. Bring the glusterd instance on the first node back up.

Also tried node-down scenarios.

After glusterd or the node comes back up, all bricks come back online with brick multiplexing enabled (see the command sketch below).
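For reference, a minimal command-level sketch of the verification flow above. The peer hostnames n1/n2/n3, the brick paths under /bricks, and the performance.readdir-ahead option are only illustrative assumptions; any volume set operation issued while glusterd on n1 is down exercises the same path.

# Enable brick multiplexing cluster-wide (run once, from any node)
gluster volume set all cluster.brick-multiplex on

# Create and start 20 1x3 volumes (hostnames and brick paths are examples)
for i in $(seq 1 20); do
    gluster volume create vol$i replica 3 \
        n1:/bricks/vol$i n2:/bricks/vol$i n3:/bricks/vol$i force
    gluster volume start vol$i
done

# On n1: stop glusterd (or power the node off for the node-down variant)
systemctl stop glusterd

# From n2, while n1 is down: perform a volume set operation on all 20 volumes
for i in $(seq 1 20); do
    gluster volume set vol$i performance.readdir-ahead on
done

# On n1: bring glusterd back and confirm every brick reports Online: Y
systemctl start glusterd
gluster volume status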

Comment 14 errata-xmlrpc 2018-03-26 05:39:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0580