+++ This bug was initially created as a clone of Bug #1298068 +++
Description of problem:
Had a 5-node cluster (n1, n2, n3, n4 and n5) with one distributed volume and server quorum enabled. Stopped glusterd on 3 nodes (n3, n4 and n5) and checked the volume status on node n1: the bricks were offline, as expected. Then restarted glusterd on that node (n1) and checked the volume status again; this time the bricks were online.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Have a 5-node cluster with one distributed volume
2. Enable the server quorum
3. Bring down 3 nodes (e.g. n3, n4 and n5)
4. Check the volume status in node-1 (n1) // bricks will be in offline state
5. Restart glusterd on node-1
6. Check the volume status // bricks will be in online state
Actual results:
Bricks are online even though server quorum is not met.
Expected results:
Bricks should be in the offline state when server quorum is not met.
REVIEW: http://review.gluster.org/13236 (glusterd: check quorum on restart bricks) posted (#1) for review on master by Atin Mukherjee (firstname.lastname@example.org)
REVIEW: http://review.gluster.org/13236 (glusterd: check quorum on restart bricks) posted (#2) for review on master by Atin Mukherjee (email@example.com)
REVIEW: http://review.gluster.org/13236 (glusterd: check quorum on restart bricks) posted (#3) for review on master by Atin Mukherjee (firstname.lastname@example.org)
COMMIT: http://review.gluster.org/13236 committed in master by Jeff Darcy (email@example.com)
Author: Atin Mukherjee <firstname.lastname@example.org>
Date: Thu Jan 14 11:11:45 2016 +0530
glusterd: check quorum on restart bricks
While spawning bricks on a glusterd restart, quorum should be checked and
bricks shouldn't be started if the volume doesn't meet quorum.
Signed-off-by: Atin Mukherjee <email@example.com>
Smoke: Gluster Build System <firstname.lastname@example.org>
NetBSD-regression: NetBSD Build System <email@example.com>
CentOS-regression: Gluster Build System <firstname.lastname@example.org>
Reviewed-by: Jeff Darcy <email@example.com>
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.
glusterfs-3.8.0 has been announced on the Gluster mailing lists; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list and the update infrastructure for your distribution.