Description of problem: When there is a loss of quorum, all operations should be blocked. Currently a few operations (e.g. volume set, add-brick, remove-brick) still succeed when quorum is lost.

Version-Release number of selected component (if applicable): Mainline

How reproducible: Almost always

Steps to Reproduce:
1. Create a distribute volume with two nodes in the cluster.
2. Set cluster.server-quorum-type to server (gluster volume set).
3. Set the quorum ratio, cluster.server-quorum-ratio, to 51.
4. Bring down one of the nodes in the cluster.
5. Perform add-brick, volume set, or remove-brick operations on the node that is still active.
6. Observe that these operations are allowed even though quorum is lost.

Actual results: The operations succeed even though quorum is not met.

Expected results: The operations should fail with an error; no operation should be allowed when quorum is not met.
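The quorum arithmetic behind the steps above can be illustrated with a minimal sketch (a hypothetical helper, not actual glusterd code): with two nodes and a ratio of 51, losing one node leaves 50% of peers active, which is below the configured ratio, so quorum is lost.

```python
def server_quorum_met(active_peers: int, total_peers: int, ratio_percent: int) -> bool:
    """Return True if the fraction of active peers meets the quorum ratio.

    Integer arithmetic avoids float rounding: active/total >= ratio/100
    is rewritten as active * 100 >= ratio * total.
    """
    return active_peers * 100 >= ratio_percent * total_peers

# Two-node cluster, ratio 51: one node down -> 50% < 51% -> quorum lost.
print(server_quorum_met(1, 2, 51))  # False: operations should be blocked
print(server_quorum_met(2, 2, 51))  # True: operations allowed
```

This is also why the default ratio is typically set just above 50% in a two-node cluster: at exactly 50, a single surviving node would still satisfy quorum.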
REVIEW: http://review.gluster.org/9349 (glusterd: quorum validation in glusterd sync-op framework) posted (#1) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/9349 (glusterd: quorum validation in glusterd sync-op framework) posted (#2) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/9349 (glusterd: quorum validation in glusterd sync-op framework) posted (#3) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/9349 (glusterd: quorum validation in glusterd syncop framework) posted (#4) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/9349 (glusterd: quorum validation in glusterd syncop framework) posted (#5) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/9422 (glusterd: quorum calculation should happen on global peer_list) posted (#3) for review on master by Atin Mukherjee (amukherj)
COMMIT: http://review.gluster.org/9422 committed in master by Krishnan Parthasarathi (kparthas) ------

commit 9d37406b59fc33940c8e4e925ef9803b2d9b6507
Author: Atin Mukherjee <amukherj>
Date: Fri Jan 9 10:15:04 2015 +0530

    glusterd: quorum calculation should happen on global peer_list

    Apart from snapshot, for all other transactions quorum should be
    calculated on the global peer list.

    Change-Id: I30bacdb6521b0c6fd762be84d3b7aa40d00aacc4
    BUG: 1177132
    Signed-off-by: Atin Mukherjee <amukherj>
    Reviewed-on: http://review.gluster.org/9422
    Reviewed-by: Kaushal M <kaushal>
    Reviewed-by: Gaurav Kumar Garg <ggarg>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
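The peer-list selection this commit describes can be sketched as follows (names are hypothetical stand-ins, not glusterd symbols): snapshot transactions compute quorum over their own peer list, while every other transaction uses the global peer list.

```python
def peers_for_quorum(op: str, global_peers: list, snap_peers: list) -> list:
    """Pick the peer list over which quorum is computed (sketch).

    Per the commit message: apart from snapshot, all transactions
    calculate quorum on the global peer list.
    """
    return snap_peers if op == "snapshot" else global_peers

# add-brick quorum is computed over all three peers, not the snapshot subset.
print(peers_for_quorum("add-brick", ["n1", "n2", "n3"], ["n1", "n2"]))
```

Using the wrong (smaller) list would make quorum appear satisfied with fewer active nodes than the ratio actually requires, which is the class of bug this commit addresses.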
REVIEW: http://review.gluster.org/9349 (glusterd: quorum validation in glusterd syncop framework) posted (#6) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/9349 (glusterd: quorum validation in glusterd syncop framework) posted (#7) for review on master by Gaurav Kumar Garg (ggarg)
REVIEW: http://review.gluster.org/9349 (glusterd: quorum validation in glusterd syncop framework) posted (#8) for review on master by Gaurav Kumar Garg (ggarg)
COMMIT: http://review.gluster.org/9349 committed in master by Krishnan Parthasarathi (kparthas) ------

commit 30ad195d49b971a5389d37c9d9a3583186f3d54a
Author: GauravKumarGarg <ggarg>
Date: Wed Dec 24 16:39:03 2014 +0530

    glusterd: quorum validation in glusterd syncop framework

    Previously glusterd did not perform quorum validation in the syncop
    framework. As a result, when quorum was lost, operations based on the
    syncop framework (e.g. add-brick, remove-brick, volume set) still
    succeeded without a quorum validation check.

    With this change, quorum is validated in the syncop framework, and all
    operations are blocked when quorum is lost, except volume set
    <quorum options> and the "volume reset all" command.

    Change-Id: I4c2ef16728d55c98a228bb86795023d9c1f4e9fb
    BUG: 1177132
    Signed-off-by: Gaurav Kumar Garg <ggarg>
    Reviewed-on: http://review.gluster.org/9349
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Atin Mukherjee <amukherj>
    Reviewed-by: Krishnan Parthasarathi <kparthas>
    Tested-by: Krishnan Parthasarathi <kparthas>
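The shape of the fix can be sketched as a quorum gate at the start of a syncop transaction, with the exemptions the commit message names. All identifiers below are hypothetical illustrations, not the actual glusterd implementation:

```python
class QuorumError(Exception):
    pass

# Hypothetical stand-ins for glusterd internals.
QUORUM_EXEMPT_OPS = {"volume reset all"}           # always allowed
QUORUM_OPTIONS = {"cluster.server-quorum-type",    # settable even without quorum,
                  "cluster.server-quorum-ratio"}   # so quorum can be restored or relaxed

def quorum_met(active: int, total: int, ratio: int = 51) -> bool:
    return active * 100 >= ratio * total

def syncop_begin(op: str, active: int, total: int, option: str = None) -> str:
    """Gate a syncop-framework transaction on server quorum (sketch)."""
    if op in QUORUM_EXEMPT_OPS or (op == "volume set" and option in QUORUM_OPTIONS):
        return "allowed"          # exempt commands bypass the quorum check
    if not quorum_met(active, total):
        raise QuorumError("quorum not met; operation not allowed")
    return "allowed"

# With one of two nodes down, setting a quorum option is still allowed
# (otherwise the administrator could never relax the ratio to recover).
print(syncop_begin("volume set", 1, 2, "cluster.server-quorum-ratio"))
```

Without the exemptions, a cluster that lost quorum could never adjust its quorum options to recover, which is why the commit carves out volume set <quorum options> and "volume reset all".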
This bug is being closed because a release that should address the reported issue has been made available. If the problem is still not fixed with glusterfs-3.7.0, please open a new bug report. glusterfs-3.7.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939 [2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user