+++ This bug was initially created as a clone of Bug #1064309 +++

Description of problem:
All the bricks of a master node show Faulty status if the slave node to which at least one of those bricks is connected goes down. For example, if a node has 3 bricks and one of the slave nodes goes down, and at least one of those 3 bricks is connected to that slave node, then the status of all 3 bricks goes to Faulty.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.59rhs-1

How reproducible:
Happens every time.

Steps to Reproduce (see the CLI sketch below):
1. Create and start a geo-replication session between a master (6x2, 4 nodes) and a slave (6x2, 4 nodes).
2. Bring down one of the slave nodes.
3. Wait for some time and check the status.

Actual results:
All the bricks of a node show Faulty status if the slave node to which at least one of the bricks is connected goes down.

Expected results:
Only the brick connected to the downed node should be Faulty.
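A minimal sketch of the reproduction steps, assuming hypothetical master hosts m1-m4 and slave hosts s1-s4, brick paths /bricks/b1-b3, and placeholder volume names mastervol/slavevol; passwordless SSH from the master cluster to the slave is assumed to be already configured:

# 1. Create and start a 6x2 distributed-replicate master volume
#    (12 bricks, 3 per node; consecutive bricks form replica pairs).
gluster volume create mastervol replica 2 \
    m1:/bricks/b1 m2:/bricks/b1 m3:/bricks/b1 m4:/bricks/b1 \
    m1:/bricks/b2 m2:/bricks/b2 m3:/bricks/b2 m4:/bricks/b2 \
    m1:/bricks/b3 m2:/bricks/b3 m3:/bricks/b3 m4:/bricks/b3
gluster volume start mastervol
# (Create and start an equivalent 6x2 slavevol on the slave cluster s1-s4.)

# Set up and start the geo-replication session.
gluster volume geo-replication mastervol s1::slavevol create push-pem
gluster volume geo-replication mastervol s1::slavevol start

# 2. Bring down one slave node (e.g. power it off, or stop glusterd on s2).

# 3. Wait a while, then check the per-brick session status. With the bug,
#    every brick on the affected master node shows Faulty, not only the
#    brick whose worker is connected to the downed slave node.
gluster volume geo-replication mastervol s1::slavevol status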
REVIEW: http://review.gluster.org/10121 (geo-rep: Status Enhancements) posted (#3 through #9) for review on master by Aravinda VK (avishwan), with revision #4 posted by Saravanakumar Arumugam (sarumuga).
http://review.gluster.org/#/c/10121/ and http://review.gluster.org/#/c/10580/ have been merged.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user