Bug 1556670 - glusterd fails to attach brick during restart of the node
Summary: glusterd fails to attach brick during restart of the node
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.1 Async
Assignee: Atin Mukherjee
QA Contact: Bala Konda Reddy M
URL:
Whiteboard: brick-multiplexing
Depends On: 1540607
Blocks:
 
Reported: 2018-03-15 02:47 UTC by Atin Mukherjee
Modified: 2021-06-10 15:20 UTC
CC List: 13 users

Fixed In Version: glusterfs-3.8.4-54.3
Doc Type: Bug Fix
Doc Text:
Previously, on a cluster with brick multiplexing enabled, if the gluster instance on one of the nodes went down while volume operations were underway on other nodes, restarting that gluster instance did not bring up its bricks. With this fix, restarting the gluster instance in this scenario brings the bricks up successfully.
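For context, brick multiplexing attaches all local bricks to a single glusterfsd process per node, which is the precondition for this bug. A minimal sketch of enabling it and observing the consolidated process (the volume set command is the documented cluster-wide option; the process check is illustrative):

# Enable brick multiplexing for the whole cluster (applies to all volumes)
gluster volume set all cluster.brick-multiplex on

# With multiplexing on, all bricks on a node attach to one glusterfsd,
# so the process count drops to 1 regardless of the number of bricks
pgrep -c glusterfsd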
Clone Of: 1540600
Environment:
Last Closed: 2018-03-26 05:39:25 UTC
Embargoed:




Links
System: Red Hat Product Errata  ID: RHBA-2018:0580  Last Updated: 2018-03-26 05:39:30 UTC

Comment 6 Rajesh Madaka 2018-03-23 05:57:52 UTC
Verified in glusterfs-3.8.4-54.3

Tried scenario:

1. Create a 3-node cluster, enable brick multiplexing, then create and start 20 1 x 3 volumes.
2. Bring down glusterd on the first node and perform a volume set operation on all 20 volumes from any of the other nodes.
3. Bring the glusterd instance back up on the first node.

Also tried the same flow with the whole node down instead of only glusterd.

After glusterd (or the node) comes back up, all bricks come back online with brick multiplexing enabled; the CLI sketch below mirrors this flow.
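A sketch of the verification flow as CLI commands, assuming hypothetical hostnames node1-node3, volume names vol1-vol20, and brick paths under /bricks (run each step on the node indicated in the comments):

# On node1: form the 3-node cluster and enable brick multiplexing
gluster peer probe node2
gluster peer probe node3
gluster volume set all cluster.brick-multiplex on

# On node1: create and start 20 replica-3 (1 x 3) volumes
for i in $(seq 1 20); do
    gluster volume create vol$i replica 3 \
        node1:/bricks/vol$i/brick node2:/bricks/vol$i/brick node3:/bricks/vol$i/brick
    gluster volume start vol$i
done

# On node1: stop glusterd
systemctl stop glusterd

# On node2: perform a volume set operation on every volume while node1 is down
for i in $(seq 1 20); do
    gluster volume set vol$i performance.readdir-ahead on
done

# On node1: restart glusterd, then confirm every brick shows Online: Y
systemctl start glusterd
gluster volume status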

Comment 14 errata-xmlrpc 2018-03-26 05:39:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0580

