Bug 1509102

Summary: In distribute volume after glusterd restart, brick goes offline
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: akarsha <akrai>
Component: glusterd
Assignee: Atin Mukherjee <amukherj>
Status: CLOSED ERRATA
QA Contact: Rajesh Madaka <rmadaka>
Severity: medium
Priority: medium
Version: rhgs-3.3
CC: akrai, nchilaka, rhinduja, rhs-bugs, rmadaka, sheggodu, storage-qa-internal, vbellur
Target Release: RHGS 3.4.0
Hardware: x86_64
OS: Linux
Fixed In Version: glusterfs-3.12.2-2
Cloned As: 1509845 (view as bug list)
Last Closed: 2018-09-04 06:38:02 UTC
Type: Bug
Bug Depends On: 1509845, 1511293, 1511301
Bug Blocks: 1503134

Description akarsha 2017-11-03 04:50:37 UTC
Description of problem:
After glusterd is restarted on the one node where it is still running (glusterd being stopped on the other two nodes), the brick on that node goes offline.

Version-Release number of selected component (if applicable):
3.8.4-50

How reproducible:
3/3

Steps to Reproduce:
1. Create a distribute volume with three bricks, one from each node, and start it.
2. Stop glusterd on the other two nodes and check the volume status on the node where glusterd is still running.
3. Restart glusterd on that node and check the volume status again (a command sketch follows).
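
A minimal shell sketch of the reproduction, assuming a 3-node cluster with hypothetical hostnames node1/node2/node3 and the brick path from the status output below:

# On node1: create a 3-brick distribute volume, one brick per node, and start it
gluster volume create testvol node1:/bricks/brick0/testvol \
    node2:/bricks/brick0/testvol node3:/bricks/brick0/testvol
gluster volume start testvol

# On node2 and node3: stop glusterd so only node1's daemon remains
systemctl stop glusterd

# On node1: the local brick should still show Online = Y
gluster volume status testvol

# On node1: restart glusterd; with the bug, the local brick goes offline
systemctl restart glusterd
gluster volume status testvol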

Actual results:
Before restarting glusterd:

Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.52:/bricks/brick0/testvol    49160     0          Y       17734
 
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks

After restarting glusterd:

Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.52:/bricks/brick0/testvol    N/A       N/A        N       N/A  
 
Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks


Expected results:
The brick must remain online after glusterd is restarted.

Additional info:
Glusterd is stopped on the other two nodes.

Comment 4 Atin Mukherjee 2017-11-06 08:03:29 UTC
upstream patch : https://review.gluster.org/18669

Comment 7 Atin Mukherjee 2018-01-03 13:34:11 UTC
There's an issue with this patch: it causes a regression in the brick-multiplexing node-reboot scenario. One more patch, https://review.gluster.org/19134, is required to fix this completely.
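
For context, a minimal sketch of the scenario the regression refers to; cluster.brick-multiplex is the standard cluster-wide toggle, and the reboot step is paraphrased from the comment:

# Enable brick multiplexing cluster-wide (bricks share a single glusterfsd process)
gluster volume set all cluster.brick-multiplex on

# Reboot one node, then verify every multiplexed brick comes back online
gluster volume status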

Comment 9 Rajesh Madaka 2018-02-16 10:29:28 UTC
Verified this bug for a distributed volume and a replica-3 volume on a 6-node cluster.

Verified scenario:

-> Created a distribute volume with one brick from each node in the 6-node cluster.
-> Stopped the glusterd service on 5 of the nodes.
-> Verified gluster volume status from the node where glusterd is still running.
-> The volume status is correct and the brick on that node is online.
-> Restarted the glusterd service and verified gluster volume status from the same node.
-> The gluster volume status is correct and the brick is online.

The same steps were followed and verified for a replica-3 volume (a sketch of that variant follows).
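
A sketch of the replica-3 variant, assuming hypothetical hostnames node1 through node6:

# Create a replica-3 volume across the 6 nodes (two replica sets) and start it
gluster volume create repvol replica 3 \
    node1:/bricks/brick1/repvol node2:/bricks/brick1/repvol node3:/bricks/brick1/repvol \
    node4:/bricks/brick1/repvol node5:/bricks/brick1/repvol node6:/bricks/brick1/repvol
gluster volume start repvol

# Stop glusterd on node2 through node6, then on node1:
gluster volume status repvol    # the local brick should be online
systemctl restart glusterd
gluster volume status repvol    # with the fix, the brick stays online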

Moving this bug to the verified state.

Verified version: glusterfs-3.12.2-4

Comment 11 errata-xmlrpc 2018-09-04 06:38:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607