Bug 1541438

Summary: quorum-reads option can give inconsistent reads
Product: [Community] GlusterFS
Component: replicate
Version: mainline
Status: CLOSED UPSTREAM
Severity: medium
Priority: medium
Reporter: Karthik U S <ksubrahm>
Assignee: Karthik U S <ksubrahm>
CC: bugs, nchilaka, pkarampu, rhs-bugs, sabose, storage-qa-internal
Hardware: Unspecified
OS: Unspecified
Clone Of: 1537480
Bug Blocks: 1537480
Type: Bug
Last Closed: 2020-03-12 12:45:43 UTC

Comment 1 Karthik U S 2018-02-02 15:03:21 UTC
Description of problem:
For a file, Brick-A has pending operations on Brick-B, Brick-B has pending operations on Brick-C, and Brick-C has pending operations on Brick-A. Since no brick is blamed by both of the other two, any of these bricks can be considered a good copy and used as the source for a heal. Reads fail until the heal happens.
The inconsistent-read issue we found happens when any one of the bricks goes down in this state: if Brick-A goes down, reads are served from Brick-B; if Brick-B goes down, reads are served from Brick-C; and if Brick-C goes down, reads are served from Brick-A. Each of these reads could return different content.
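
To make the state concrete, here is a minimal standalone C sketch. It is not GlusterFS source: the pending[][] matrix and pick_read_source() are illustrative stand-ins for AFR's pending-changelog xattrs and its read-source selection. It encodes the cyclic blame above and shows that with all three bricks up no brick is unaccused (reads fail until heal), while taking any single brick down leaves a different "clean" brick to serve reads:

#include <stdio.h>

#define BRICKS 3

/* Cyclic blame from the description: A blames B, B blames C, C blames A.
 * pending[i][j] != 0 means brick i records pending operations on brick j. */
static const int pending[BRICKS][BRICKS] = {
    {0, 1, 0},   /* Brick-A blames Brick-B */
    {0, 0, 1},   /* Brick-B blames Brick-C */
    {1, 0, 0},   /* Brick-C blames Brick-A */
};

/* Return the index of a brick that is up and blamed by no other up brick,
 * or -1 if none qualifies (no unambiguously good copy). */
static int pick_read_source(const int up[BRICKS])
{
    for (int cand = 0; cand < BRICKS; cand++) {
        if (!up[cand])
            continue;
        int blamed = 0;
        for (int other = 0; other < BRICKS; other++)
            if (up[other] && other != cand && pending[other][cand])
                blamed = 1;
        if (!blamed)
            return cand;
    }
    return -1;
}

int main(void)
{
    const char *names[BRICKS] = {"Brick-A", "Brick-B", "Brick-C"};

    int all_up[BRICKS] = {1, 1, 1};
    int src = pick_read_source(all_up);
    printf("all bricks up -> %s\n",
           src >= 0 ? names[src] : "no unaccused copy, reads fail until heal");

    for (int down = 0; down < BRICKS; down++) {
        int up[BRICKS] = {1, 1, 1};
        up[down] = 0;
        int s = pick_read_source(up);
        printf("%s down -> read from %s\n", names[down],
               s >= 0 ? names[s] : "(no clean source)");
    }
    return 0;
}

Running it prints Brick-B, Brick-C and Brick-A respectively for the three single-brick failures, i.e. each failure exposes a potentially different version of the file.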


Version-Release number of selected component (if applicable):


How reproducible:
It is extremely difficult to hit this case. We are mostly going to simulate it by setting breakpoints in gdb, for example as sketched below.
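
For illustration only, one possible shape of such a gdb session is to attach to a glusterfs process and pause it inside the AFR transaction path, so that a write completes but its changelog accounting is held up on selected bricks. The breakpoint symbol used here (afr_transaction, from the replicate translator) is just an example; the actual breakpoints used for the simulation may differ:

gdb -p <pid of the glusterfs client process>
(gdb) break afr_transaction
(gdb) continue
# ... issue a write on the mount, then resume selectively so that only
# some bricks record the pending state, repeating to build the cycle.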

Comment 2 Worker Ant 2018-02-02 15:06:33 UTC
REVIEW: https://review.gluster.org/19477 (cluster/afr: Implementation of generic functions for consistent read) posted (#1) for review on master by Karthik U S

Comment 3 Worker Ant 2020-03-12 12:45:43 UTC
This bug has been moved to https://github.com/gluster/glusterfs/issues/941 and will be tracked there from now on. Visit the GitHub issue URL for further details.