Bug 1313017 - Quorum is not met and writes are restricted with quorum-type auto in a 3 node system
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: replicate
Version: 3.7.6
Platform: x86_64 Linux
Priority: medium
Severity: high
Assigned To: Pranith Kumar K
Keywords: Triaged
Depends On:
Blocks:
Reported: 2016-02-29 13:02 EST by Christian Petersen
Modified: 2017-03-08 05:55 EST (History)
CC: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-03-08 05:55:48 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Christian Petersen 2016-02-29 13:02:21 EST
Description of problem:
My cluster has 3 replicated nodes and was set to quorum-type auto. I am using nfs-ganesha to export the volume via NFSv3 to an ESXi cluster for shared VM storage. I initiated a test failure on one of the bricks. The nfs-ganesha gluster client log stated that writes were not allowed because quorum had not been met.

After switching to quorum-type fixed with a quorum-count of 2, the problem remained. Only after switching to an undesirable quorum-count of 1 were writes allowed again.

I tore down the entire cluster and recreated it, this time starting with quorum-type set to fixed and quorum-count set to 2. I initiated another test failure on one of the bricks and was able to keep writing to the cluster properly.
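For reference, the quorum changes described above would be applied with the standard gluster CLI; the volume name `vol0` below is a placeholder, not taken from this report:

```shell
# Switch client-side quorum from auto to a fixed count of 2
# ("vol0" is a hypothetical volume name):
gluster volume set vol0 cluster.quorum-type fixed
gluster volume set vol0 cluster.quorum-count 2

# Verify the resulting settings:
gluster volume get vol0 cluster.quorum-type
gluster volume get vol0 cluster.quorum-count
```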

Version-Release number of selected component (if applicable):
nfs-ganesha 3.7.6

How reproducible:
This happened twice while I was troubleshooting, as I tore down and recreated the cluster. I have not confirmed how reliably it reproduces.

Steps to Reproduce:
1. Create a replicated 3-node cluster
2. Set quorum-type to auto
3. Halt one of the bricks and test writing to the cluster
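The steps above can be sketched with the gluster CLI; the volume name, hostnames, and brick paths are placeholders, not taken from this report:

```shell
# Create and start a 3-way replicated volume
# (hypothetical hosts server1..server3 and brick paths):
gluster volume create vol0 replica 3 \
    server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1
gluster volume start vol0

# Enable automatic client-side quorum (majority of replicas required):
gluster volume set vol0 cluster.quorum-type auto

# Halt one brick, e.g. by killing its glusterfsd process on one server,
# then attempt writes through the nfs-ganesha export. With 2 of 3
# bricks still up, writes should continue to succeed.
```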

Actual results:
Quorum is broken when one brick out of 3 is lost

Expected results:
Quorum should be maintained as 2 of 3 bricks are still alive
Comment 2 Kaushal 2017-03-08 05:55:48 EST
This bug is being closed because GlusterFS-3.7 has reached its end of life.

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please reopen this bug against the newer release.
