Description of problem:
In a volume with quorum enabled, I'd expect reads to fail when the available servers cannot form a quorum.

Version-Release number of selected component (if applicable):

How reproducible:
Every time.

Steps to Reproduce:
Run a script like:

gluster volume create vol1 replica 3 192.168.124.21:/mnt/bricks/vol1 192.168.124.22:/mnt/bricks/vol1 192.168.124.23:/mnt/bricks/vol1
gluster volume set vol1 cluster.quorum-type auto
gluster volume start vol1
mkdir -p /mnt/vol1/
mount -t glusterfs 192.168.124.21:/vol1 /mnt/vol1
cd /mnt/vol1
echo "update 1" > test.txt
cat test.txt
ssh 192.168.124.21 service glusterd stop
echo "update 2" > test.txt
cat test.txt
ssh 192.168.124.21 service glusterd start
ssh 192.168.124.22 service glusterd stop
ssh 192.168.124.23 service glusterd stop
sleep 20
cat test.txt

Actual results:
Creation of volume vol1 has been successful. Please start the volume to access data.
Set volume successful
Starting volume vol1 has been successful
update 1
Redirecting to /bin/systemctl stop glusterd.service
update 2
Redirecting to /bin/systemctl start glusterd.service
Redirecting to /bin/systemctl stop glusterd.service
Redirecting to /bin/systemctl stop glusterd.service
update 1

Expected results:
Creation of volume vol1 has been successful. Please start the volume to access data.
Set volume successful
Starting volume vol1 has been successful
update 1
Redirecting to /bin/systemctl stop glusterd.service
update 2
Redirecting to /bin/systemctl start glusterd.service
Redirecting to /bin/systemctl stop glusterd.service
Redirecting to /bin/systemctl stop glusterd.service
cat: test.txt: Transport endpoint is not connected

Additional info:
Chatting about this in IRC, and it seems to me like this is a feature request for an option determining whether to go read-only or totally offline when a client can't reach a majority of servers. Something like "cluster.quorum-loss readonly|offline" perhaps.
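A minimal sketch of how the proposed option might be used. Note that cluster.quorum-loss is hypothetical here (it is the feature being requested, not an existing GlusterFS option), and the two values match the behaviors suggested above:

```shell
# HYPOTHETICAL: cluster.quorum-loss does not exist yet; this is the
# proposed option from the comment above.
# "readonly": clients keep serving reads when quorum is lost.
# "offline":  clients fail all operations (ENOTCONN) when quorum is lost.
gluster volume set vol1 cluster.quorum-loss readonly
```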
Hi Louis, Yeah that should work. BTW keep in mind that since the number of servers in the write quorum is configurable, the number of servers needed in the read quorum will depend on that configuration. To get consistent reads, you would need (N-(cluster.quorum-count))+1 servers in the read quorum.
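The arithmetic above can be sketched as follows, assuming N is the replica count and W is the configured cluster.quorum-count (write quorum). The minimum read quorum R is chosen so that any R servers must overlap with any W servers, i.e. R + W > N:

```shell
# Smallest read quorum that intersects every possible write quorum:
# R + W > N  =>  R = N - W + 1
N=3   # replica count (as in the replica-3 volume from the report)
W=2   # cluster.quorum-count; assumed value for illustration
R=$(( N - W + 1 ))
echo "minimum read quorum: $R"
```

So for the replica-3 volume in this report with a write quorum of 2, a client would need 2 reachable bricks for a consistent read.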
Feature requests make the most sense against the 'mainline' release; there is no ETA for an implementation, and requests might get forgotten when filed against a particular version.
Because of the large number of bugs filed against it, the 'mainline' version is ambiguous and about to be removed as a choice. If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.