Bug 949096 - [FEAT] : Inconsistent read on volume configured with cluster.quorum-type auto
Summary: [FEAT] : Inconsistent read on volume configured with cluster.quorum-type auto
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: replicate
Version: mainline
Hardware: i686
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-04-05 21:24 UTC by Hiram Chirino
Modified: 2015-10-22 15:46 UTC
CC: 4 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-22 15:46:38 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Hiram Chirino 2013-04-05 21:24:10 UTC
Description of problem:

In a volume with quorum enabled, I'd expect reads to fail when the available servers cannot form a quorum. Otherwise a client can read stale data from a minority partition that missed later writes.

Version-Release number of selected component (if applicable):


How reproducible:

Every time.

Steps to Reproduce:

Run a script like:
# Create a 3-way replica volume and enable client-side quorum
gluster volume create vol1 replica 3 192.168.124.21:/mnt/bricks/vol1 192.168.124.22:/mnt/bricks/vol1 192.168.124.23:/mnt/bricks/vol1
gluster volume set vol1 cluster.quorum-type auto
gluster volume start vol1

# Mount the volume on a client
mkdir -p /mnt/vol1/
mount -t glusterfs 192.168.124.21:/vol1 /mnt/vol1
cd /mnt/vol1

# Write and read while all three servers are up
echo "update 1" > test.txt
cat test.txt

# Stop one server: 2 of 3 remain, quorum holds, so the write succeeds
ssh 192.168.124.21 service glusterd stop
echo "update 2" > test.txt
cat test.txt

# Restart the server that missed "update 2", then stop the two that saw it,
# leaving only the stale server reachable (1 of 3: no quorum)
ssh 192.168.124.21 service glusterd start
ssh 192.168.124.22 service glusterd stop
ssh 192.168.124.23 service glusterd stop
sleep 20

# Expected to fail for lack of quorum; instead it returns the stale "update 1"
cat test.txt


Actual results:

Creation of volume vol1 has been successful. Please start the volume to access data.
Set volume successful
Starting volume vol1 has been successful
update 1
Redirecting to /bin/systemctl stop  glusterd.service
update 2
Redirecting to /bin/systemctl start  glusterd.service
Redirecting to /bin/systemctl stop  glusterd.service
Redirecting to /bin/systemctl stop  glusterd.service
update 1

Expected results:

Creation of volume vol1 has been successful. Please start the volume to access data.
Set volume successful
Starting volume vol1 has been successful
update 1
Redirecting to /bin/systemctl stop  glusterd.service
update 2
Redirecting to /bin/systemctl start  glusterd.service
Redirecting to /bin/systemctl stop  glusterd.service
Redirecting to /bin/systemctl stop  glusterd.service
cat: test.txt: Transport endpoint is not connected

Additional info:

Comment 1 Louis Zuckerman 2013-04-05 21:57:22 UTC
We chatted about this in IRC, and it seems to me this is a feature request for an option that determines whether the client goes read-only or fully offline when it can't reach a majority of servers.

Something like "cluster.quorum-loss readonly|offline" perhaps.
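
For illustration, the hypothetical option might be used like this (cluster.quorum-loss is only the proposal above, not an existing GlusterFS setting):

# reject both reads and writes when the client loses quorum
gluster volume set vol1 cluster.quorum-loss offline
# or: keep serving reads but reject writes
gluster volume set vol1 cluster.quorum-loss readonly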

Comment 2 Hiram Chirino 2013-04-08 13:03:33 UTC
Hi Louis,

Yeah, that should work.

BTW, keep in mind that since the number of servers in the write quorum is configurable, the number of servers needed in the read quorum depends on that configuration. To get consistent reads, you would need (N - cluster.quorum-count) + 1 servers in the read quorum.
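
As a worked example (my numbers, not from the report): with N = 3 replicas and cluster.quorum-count = 2, a consistent read needs (3 - 2) + 1 = 2 servers, because any 2 servers must overlap every 2-server write quorum, so at least one server in the read quorum has seen the latest write.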

Comment 3 Niels de Vos 2014-11-27 14:45:15 UTC
Feature requests make the most sense against the 'mainline' release: there is no ETA for an implementation, and requests might get forgotten when filed against a particular version.

Comment 4 Kaleb KEITHLEY 2015-10-22 15:46:38 UTC
The 'mainline' version is ambiguous and is about to be removed as a choice, so the large number of bugs filed against it are being closed.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.

