pcs warns the user about losing quorum upon node removal even when that is not true. In the scenario shown below we have a 4-node cluster set up with a qdevice; removing a node results in vote recalculation and quorum is not lost.

Tested with:
pcs-0.9.152-10.el7.x86_64

# pcs status
Cluster name: STSRHTS28875
Stack: corosync
Current DC: virt-281 (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Thu Sep 29 11:26:46 2016
Last change: Thu Sep 29 11:26:05 2016 by hacluster via crmd on virt-281

4 nodes and 12 resources configured

Online: [ virt-279 virt-280 virt-281 virt-282 ]

Full list of resources:

 fence-virt-279 (stonith:fence_xvm): Started virt-280
 fence-virt-280 (stonith:fence_xvm): Started virt-281
 fence-virt-281 (stonith:fence_xvm): Started virt-282
 fence-virt-282 (stonith:fence_xvm): Started virt-279
 Clone Set: dlm-clone [dlm]
     Started: [ virt-279 virt-280 virt-281 virt-282 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ virt-279 virt-280 virt-281 virt-282 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

# pcs quorum status
Quorum information
------------------
Date:             Thu Sep 29 11:26:51 2016
Quorum provider:  corosync_votequorum
Nodes:            4
Node ID:          2
Ring ID:          1/160
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   7
Highest expected: 7
Total votes:      7
Quorum:           4
Flags:            Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         1          1   A,NV,NMW virt-279
         2          1    A,V,NMW virt-280 (local)
         3          1    A,V,NMW virt-281
         4          1    A,V,NMW virt-282
         0          3            Qdevice

False positive message when trying to remove a node:

# pcs cluster node remove virt-279
Error: Removing the node will cause a loss of the quorum, use --force to override
# pcs cluster node remove virt-279 --force
virt-279: Stopping Cluster (pacemaker)...
virt-279: Successfully destroyed cluster
virt-280: Corosync updated
virt-281: Corosync updated
virt-282: Corosync updated

# pcs status
Cluster name: STSRHTS28875
Stack: corosync
Current DC: virt-281 (version 1.1.15-11.el7-e174ec8) - partition with quorum
Last updated: Thu Sep 29 11:27:34 2016
Last change: Thu Sep 29 11:27:24 2016 by root via crm_node on virt-280

3 nodes and 10 resources configured

Online: [ virt-280 virt-281 virt-282 ]

Full list of resources:

 fence-virt-279 (stonith:fence_xvm): Started virt-280
 fence-virt-280 (stonith:fence_xvm): Started virt-281
 fence-virt-281 (stonith:fence_xvm): Started virt-282
 fence-virt-282 (stonith:fence_xvm): Started virt-280
 Clone Set: dlm-clone [dlm]
     Started: [ virt-280 virt-281 virt-282 ]
 Clone Set: clvmd-clone [clvmd]
     Started: [ virt-280 virt-281 virt-282 ]

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

# pcs quorum status
Quorum information
------------------
Date:             Thu Sep 29 11:27:36 2016
Quorum provider:  corosync_votequorum
Nodes:            3
Node ID:          2
Ring ID:          2/164
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   5
Highest expected: 5
Total votes:      5
Quorum:           3
Flags:            Quorate Qdevice

Membership information
----------------------
    Nodeid      Votes    Qdevice Name
         2          1    A,V,NMW virt-280 (local)
         3          1    A,V,NMW virt-281
         4          1    A,V,NMW virt-282
         0          2            Qdevice
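For reference, a minimal sketch of the vote arithmetic that shows why quorum survives the removal. It assumes the qdevice uses the "lms" algorithm, under which the qdevice holds N-1 votes in an N-node cluster; that assumption matches the vote counts in the output above (3 qdevice votes with 4 nodes, 2 with 3 nodes). This is an illustration only, not pcs's actual check.

    # Sketch of votequorum arithmetic with a qdevice running the "lms"
    # algorithm (qdevice casts N-1 votes for an N-node cluster).
    def quorum_info(nodes: int, node_votes: int = 1) -> dict:
        qdevice_votes = nodes - 1                  # lms: qdevice gets N-1 votes
        total = nodes * node_votes + qdevice_votes # "Total votes" in pcs output
        quorum = total // 2 + 1                    # strict majority of total votes
        return {"total_votes": total, "quorum": quorum}

    before = quorum_info(4)  # {'total_votes': 7, 'quorum': 4} -- matches output above
    after = quorum_info(3)   # {'total_votes': 5, 'quorum': 3} -- matches output above

    # After the removal, the 3 remaining nodes plus the qdevice still cast
    # all 5 expected votes, so 5 >= 3 and the cluster stays quorate.
    assert after["total_votes"] >= after["quorum"]
    print(before, after)

The warning appears to be a false positive because the recalculated qdevice vote is not taken into account when pcs predicts the post-removal quorum.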
I am not able to reproduce this. Based on a discussion with the reporter, I am closing this as works-for-me.