Bug 1306667 - Newly created volume starts and brings bricks online when server quorum is not met
Summary: Newly created volume starts and brings bricks online when server quorum is not met
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Atin Mukherjee
QA Contact: Byreddy
URL:
Whiteboard:
Depends On:
Blocks: 1268895 1299184 1308402
 
Reported: 2016-02-11 14:41 UTC by Byreddy
Modified: 2016-09-17 16:45 UTC
CC: 8 users

Fixed In Version: glusterfs-3.7.9-1
Doc Type: Bug Fix
Doc Text:
Previously, using the force option with the 'gluster volume start' command succeeded even when server-side quorum was not met. This has been corrected so that force no longer overrides a lack of server-side quorum being met.
Clone Of:
Clones: 1308402
Environment:
Last Closed: 2016-06-23 05:07:48 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:1240 0 normal SHIPPED_LIVE Red Hat Gluster Storage 3.1 Update 3 2016-06-23 08:51:28 UTC

Description Byreddy 2016-02-11 14:41:35 UTC
Description of problem:
=======================
Had a four-node cluster. Created a distributed volume using one brick, enabled server quorum with a server-quorum ratio of 90, then stopped glusterd on one of the nodes so that server quorum was no longer met, and started the volume: it started and the bricks came online.



Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.5-19.el7rhgs

How reproducible:
=================
Every time


Steps to Reproduce:
===================
1. Have a 4-node cluster (node-1..4)
2. Create a simple distributed volume using one brick
3. Enable server quorum
4. Set the server quorum ratio to 90
5. Stop glusterd on one of the nodes (e.g. node-4)
6. Try to start the volume now  // it will start and the bricks will be online
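The steps above can be sketched with the gluster CLI as follows; the volume name "distvol" and the brick path are hypothetical, and the commands assume a 4-node trusted storage pool already exists:

```shell
# On node-1: create a one-brick distributed volume (hypothetical names)
gluster volume create distvol node-1:/bricks/brick1/distvol

# Enable server-side quorum for the volume
gluster volume set distvol cluster.server-quorum-type server

# server-quorum-ratio is a cluster-wide option, set on "all"
gluster volume set all cluster.server-quorum-ratio 90

# On node-4: stop glusterd so only 3 of 4 nodes are up (75% < 90%)
systemctl stop glusterd

# Back on node-1: with quorum not met this start should fail,
# but on glusterfs-3.7.5-19 it succeeds and the bricks come online
gluster volume start distvol
gluster volume status distvol
```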

Actual results:
===============
Bricks are online even though server quorum is not met


Expected results:
=================
Bricks should be offline when server quorum is not met


Additional info:

Comment 3 Atin Mukherjee 2016-02-11 15:59:00 UTC
Gaurav,

Could you please check this issue?

~Atin

Comment 4 Atin Mukherjee 2016-02-11 17:59:04 UTC
This looks like a regression caused by http://review.gluster.org/12718

Comment 6 Byreddy 2016-02-12 05:29:11 UTC
Marking this bug as a regression based on the above details.

Comment 12 Gaurav Kumar Garg 2016-02-15 05:23:42 UTC
upstream patch for this bug available: http://review.gluster.org/13442

Comment 13 Atin Mukherjee 2016-02-15 05:46:55 UTC
We have a crude workaround here, which is as follows:

1. Disable server-side quorum: gluster volume reset <volname> cluster.server-quorum-type
2. Stop the volume
3. Turn server-side quorum back on: gluster volume set <volname> cluster.server-quorum-type server
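The workaround steps can be sketched as the following command sequence; "distvol" is a hypothetical volume name standing in for <volname>:

```shell
# 1. Disable server-side quorum so the volume can be stopped cleanly
gluster volume reset distvol cluster.server-quorum-type

# 2. Stop the volume (taking its bricks offline)
gluster volume stop distvol

# 3. Re-enable server-side quorum enforcement
gluster volume set distvol cluster.server-quorum-type server
```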

Comment 20 Atin Mukherjee 2016-03-22 11:58:08 UTC
The fix is now available in rhgs-3.1.3 branch, hence moving the state to Modified.

Comment 22 Byreddy 2016-04-04 07:17:10 UTC
Verified this bug using the build "glusterfs-3.7.9-1"


Repeated the reproduction steps from the description section of the bug. The fix is working properly: the volume does not start when server-side quorum is not met.


Moving to verified state based on above info.

Comment 25 Atin Mukherjee 2016-06-06 06:51:02 UTC
Laura,

This bug doesn't need a doc text since it's a regression. Apologies for the wasted effort.

Also if you see any bugs which Gaurav has worked on, raise needinfo on me instead of Satish :)

~Atin

Comment 28 errata-xmlrpc 2016-06-23 05:07:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1240

