Bug 1298439

Summary: GlusterD restart, starting the bricks when server quorum not met
Product: [Community] GlusterFS
Reporter: Atin Mukherjee <amukherj>
Component: glusterd
Assignee: Atin Mukherjee <amukherj>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: mainline
CC: bsrirama, bugs, sasundar
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-3.8rc2
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of: 1298068
: 1305256 (view as bug list)
Environment:
Last Closed: 2016-06-16 13:54:41 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1298068
Bug Blocks: 1305256

Description Atin Mukherjee 2016-01-14 05:51:19 UTC
+++ This bug was initially created as a clone of Bug #1298068 +++

Description of problem:
=======================
Had a 5-node cluster (n1, n2, n3, n4 and n5) with one distributed volume and server quorum enabled. Stopped glusterd on 3 nodes (n3, n4 and n5) and checked the volume status on node n1: the bricks were offline, as expected. Then restarted glusterd on n1 and checked the volume status again: this time the bricks were online.

Version-Release number of selected component (if applicable):
==============================================================
glusterfs-3.7.5-15


How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Have a 5-node cluster with one distributed volume
2. Enable server quorum on the volume
3. Bring down 3 nodes (e.g. n3, n4 and n5)
4. Check the volume status on node-1 (n1) // bricks will be in the offline state
5. Restart glusterd on node-1
6. Check the volume status again // bricks will now be in the online state
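
The steps above as a minimal CLI sketch. The hostnames n1..n5, the volume name "distvol" and the brick paths are illustrative; it assumes all five nodes are already peer-probed into the trusted storage pool.

# On n1: create and start a plain distributed volume across the five nodes
gluster volume create distvol n1:/bricks/b1 n2:/bricks/b1 n3:/bricks/b1 \
        n4:/bricks/b1 n5:/bricks/b1
gluster volume start distvol

# Enable server-side quorum on the volume (with the default quorum ratio,
# more than half of the peers must be running glusterd)
gluster volume set distvol cluster.server-quorum-type server

# On n3, n4 and n5: stop glusterd, leaving only 2 of 5 nodes up
systemctl stop glusterd

# On n1: quorum is lost, so the bricks are reported offline
gluster volume status distvol

# On n1: restart glusterd and check again; with the bug, the bricks
# come back online even though quorum is still not met
systemctl restart glusterd
gluster volume status distvol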

Actual results:
===============
Bricks come online even though server quorum is not met.


Expected results:
=================
Bricks should remain offline when server quorum is not met.

Comment 1 Vijay Bellur 2016-01-14 05:52:58 UTC
REVIEW: http://review.gluster.org/13236 (glusterd: check quorum on restart bricks) posted (#1) for review on master by Atin Mukherjee (amukherj)

Comment 2 Vijay Bellur 2016-01-26 15:02:08 UTC
REVIEW: http://review.gluster.org/13236 (glusterd: check quorum on restart bricks) posted (#2) for review on master by Atin Mukherjee (amukherj)

Comment 3 Vijay Bellur 2016-02-03 03:29:47 UTC
REVIEW: http://review.gluster.org/13236 (glusterd: check quorum on restart bricks) posted (#3) for review on master by Atin Mukherjee (amukherj)

Comment 4 Vijay Bellur 2016-02-05 15:26:46 UTC
COMMIT: http://review.gluster.org/13236 committed in master by Jeff Darcy (jdarcy) 
------
commit 2fe4f758f4f32151ef22d644c4de1e58a508fc3e
Author: Atin Mukherjee <amukherj>
Date:   Thu Jan 14 11:11:45 2016 +0530

    glusterd: check quorum on restart bricks
    
    While spawning bricks on a glusterd restart the quorum should be checked and
    brick shouldn't be started if the volume doesn't meet quorum.
    
    Change-Id: I21bf9055bdf38c53c81138cc204ba05a9ff6444f
    BUG: 1298439
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/13236
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: Jeff Darcy <jdarcy>
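
As a quick check of the fixed behaviour (a sketch, reusing the illustrative 5-node setup and "distvol" volume from the reproduction steps above), restarting glusterd on n1 while n3, n4 and n5 are still down should no longer bring the bricks up:

# On n1, with glusterd still stopped on n3, n4 and n5 (quorum not met)
systemctl restart glusterd
gluster volume status distvol
# With the fix, the bricks on n1 and n2 stay offline (Online column shows "N")
# until enough peers come back to satisfy server quorum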

Comment 5 Niels de Vos 2016-06-16 13:54:41 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user