Bug 1212410

Summary: dist-geo-rep: all bricks of a node show Faulty status if the slave node to which at least one of the bricks is connected goes down.
Product: [Community] GlusterFS
Reporter: Aravinda VK <avishwan>
Component: geo-replication
Assignee: Aravinda VK <avishwan>
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Version: mainline
CC: aavati, avishwan, bugs, csaba, david.macdonald, dpati, gluster-bugs, nlevinki, nsathyan, storage-qa-internal, vkoppad
Keywords: Reopened
Hardware: x86_64
OS: Linux
Whiteboard: status
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Clone Of: 1064309
Clones: 1218586
Last Closed: 2016-06-16 12:52:08 UTC
Type: Bug
Bug Depends On: 1064309    
Bug Blocks: 1218586    

Description Aravinda VK 2015-04-16 11:04:10 UTC
+++ This bug was initially created as a clone of Bug #1064309 +++

Description of problem: All bricks of a master node show Faulty status if the slave node to which at least one of those bricks is connected goes down. For example, if a master node has 3 bricks and one slave node goes down, and at least one of those 3 bricks syncs to that slave node, then the status of all 3 bricks goes Faulty, not just the affected one.
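
For illustration, the symptom looks roughly like the sketch below in the geo-replication status output. Host names, volume names, and brick paths are placeholders and the column layout is abbreviated, not exact CLI output; only the STATUS column matters here. Only the worker for brick1 actually syncs to the downed slave node, yet all three workers on node1 are reported Faulty:

    # gluster volume geo-replication mastervol slavehost1::slavevol status
    MASTER NODE   MASTER BRICK      SLAVE NODE    STATUS
    node1         /bricks/brick1    slavehost2    Faulty   (slavehost2 is down)
    node1         /bricks/brick2    slavehost3    Faulty
    node1         /bricks/brick3    slavehost4    Faulty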


Version-Release number of selected component (if applicable): glusterfs-3.4.0.59rhs-1


How reproducible: Happens every time.


Steps to Reproduce (see the command sketch after this list):
1. Create and start a geo-rep session between a master (6x2, 4 nodes) and a slave (6x2, 4 nodes).
2. Bring down one of the slave nodes.
3. Wait for some time and check the status.
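
As referenced above, a minimal command sketch for these steps. It assumes passwordless SSH from the master to slavehost1 is already set up and that mastervol and slavevol are pre-created 6x2 volumes; all host and volume names are placeholders:

    # Step 1: create and start the geo-rep session (run on a master node)
    gluster volume geo-replication mastervol slavehost1::slavevol create push-pem
    gluster volume geo-replication mastervol slavehost1::slavevol start

    # Step 2: bring down one slave node, e.g. by powering it off or by
    # stopping glusterd and killing the brick processes on it.

    # Step 3: after a minute or so, check the session status
    gluster volume geo-replication mastervol slavehost1::slavevol status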

Actual results: All bricks of the master node show Faulty status if the slave node to which at least one of those bricks is connected goes down.


Expected results: Only the brick that is connected to the downed slave node should be marked Faulty.
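
Under the expected behavior, the same illustrative listing (same placeholder names and abbreviated columns as in the sketch above) would instead read:

    MASTER NODE   MASTER BRICK      SLAVE NODE    STATUS
    node1         /bricks/brick1    slavehost2    Faulty   (slavehost2 is down)
    node1         /bricks/brick2    slavehost3    Active
    node1         /bricks/brick3    slavehost4    Passive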

Comment 1 Anand Avati 2015-04-16 11:20:48 UTC
REVIEW: http://review.gluster.org/10121 (geo-rep: Status Enhancements) posted (#3) for review on master by Aravinda VK (avishwan)

Comment 2 Anand Avati 2015-04-17 06:19:39 UTC
REVIEW: http://review.gluster.org/10121 (geo-rep: Status Enhancements) posted (#4) for review on master by Saravanakumar Arumugam (sarumuga)

Comment 3 Anand Avati 2015-04-27 13:17:58 UTC
REVIEW: http://review.gluster.org/10121 (geo-rep: Status Enhancements) posted (#5) for review on master by Aravinda VK (avishwan)

Comment 4 Anand Avati 2015-04-28 08:21:43 UTC
REVIEW: http://review.gluster.org/10121 (geo-rep: Status Enhancements) posted (#6) for review on master by Aravinda VK (avishwan)

Comment 5 Anand Avati 2015-04-30 07:04:28 UTC
REVIEW: http://review.gluster.org/10121 (geo-rep: Status Enhancements) posted (#7) for review on master by Aravinda VK (avishwan)

Comment 6 Anand Avati 2015-05-02 11:59:55 UTC
REVIEW: http://review.gluster.org/10121 (geo-rep: Status Enhancements) posted (#8) for review on master by Aravinda VK (avishwan)

Comment 7 Anand Avati 2015-05-04 07:09:53 UTC
REVIEW: http://review.gluster.org/10121 (geo-rep: Status Enhancements) posted (#9) for review on master by Aravinda VK (avishwan)

Comment 8 Aravinda VK 2015-05-10 03:33:05 UTC
http://review.gluster.org/#/c/10121/ and http://review.gluster.org/#/c/10580/ are merged.

Comment 9 Aravinda VK 2015-05-18 10:46:33 UTC
This bug is getting closed because a release has been made available that
should address the reported issue. In case the problem is still not fixed with
glusterfs-3.7.0, please open a new bug report.

Comment 10 Niels de Vos 2016-06-16 12:52:08 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user