Bug 1186692 - cluster node removal should verify possible loss of quorum
Summary: cluster node removal should verify possible loss of quorum
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pcs
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On: 1180506
Blocks:
 
Reported: 2015-01-28 10:22 UTC by Radek Steiger
Modified: 2015-11-19 09:34 UTC
CC List: 3 users

Fixed In Version: pcs-0.9.140-1.el7
Doc Type: Bug Fix
Doc Text:
Cause: A user removes a node from a cluster in which some nodes are not running.
Consequence: The cluster loses quorum.
Fix: pcs now detects whether removing the node would result in a loss of quorum and refuses to remove it if so.
Result: The user is informed that removing the node would cause the cluster to lose quorum, and has to run the command with the --force flag in order to remove the node anyway.
Clone Of:
Environment:
Last Closed: 2015-11-19 09:34:26 UTC
Target Upstream Version:
Embargoed:


Attachments
proposed fix (5.43 KB, patch), 2015-03-02 14:49 UTC, Tomas Jelinek


Links
Red Hat Product Errata RHSA-2015:2290 (normal, SHIPPED_LIVE): Moderate: pcs security, bug fix, and enhancement update. Last updated 2015-11-19 09:43:53 UTC.

Description Radek Steiger 2015-01-28 10:22:25 UTC
> Description of problem:

In bug 1180506 we've added a warning when stopping a node could cause a loss of quorum. We should add the same warning for removing nodes, such as in the following scenario:

1. Have a 5-node cluster with 2 of the nodes stopped
2. Remove one of the running nodes
3. Enjoy the loss of quorum without a warning
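
Assuming one vote per node and default quorum options, quorum for the 5-node cluster is floor(5/2) + 1 = 3 votes. After step 1 the cluster still has 3 of its 5 nodes online, so quorum holds; step 2 shrinks the cluster to 4 nodes, leaving quorum at 3 votes, but only 2 nodes remain online, so quorum is lost.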


> Version-Release number of selected component (if applicable):

pcs-0.9.137-13.el7


> Actual results:

Loss of quorum.


> Expected results:

A warning message similar to the one shown when stopping a node:
"Error: Stopping the node(s) will cause a loss of the quorum, use --force to override"

Comment 1 Tomas Jelinek 2015-03-02 14:49:27 UTC
Created attachment 997107
proposed fix

Comment 2 Tomas Jelinek 2015-03-02 14:58:34 UTC
Test:

[root@rh70-node1:~]# pcs status nodes both
Corosync Nodes:
 Online: rh70-node1 rh70-node2 rh70-node3 
 Offline: 
Pacemaker Nodes:
 Online: rh70-node1 rh70-node2 rh70-node3 
 Standby: 
 Offline: 
[root@rh70-node1:~]# pcs cluster stop rh70-node2
rh70-node2: Stopping Cluster (pacemaker)...
rh70-node2: Stopping Cluster (corosync)...
[root@rh70-node1:~]# pcs cluster node remove rh70-node3
Error: Removing the node will cause a loss of the quorum, use --force to override
[root@rh70-node1:~]# echo $?
1
[root@rh70-node1:~]# pcs status nodes both
Corosync Nodes:
 Online: rh70-node1 rh70-node3 
 Offline: rh70-node2 
Pacemaker Nodes:
 Online: rh70-node1 rh70-node3 
 Standby: 
 Offline: rh70-node2 
[root@rh70-node1:~]# pcs cluster node remove rh70-node3 --force
rh70-node3: Stopping Cluster (pacemaker)...
rh70-node3: Successfully destroyed cluster
rh70-node1: Corosync updated
rh70-node2: Corosync updated
[root@rh70-node1:~]# echo $?
0
[root@rh70-node1:~]# pcs status nodes both
Corosync Nodes:
 Online: rh70-node1 
 Offline: rh70-node2 
Pacemaker Nodes:
 Online: rh70-node1 
 Standby: 
 Offline: rh70-node2
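
For illustration, here is a minimal Python sketch of the kind of check this fix performs (hypothetical names, not the actual pcs source; it assumes one vote per node and no special quorum options such as two_node or last_man_standing):

def quorum_votes(total_votes):
    # A cluster has quorum while a strict majority of all votes is present.
    return total_votes // 2 + 1

def removal_loses_quorum(all_nodes, online_nodes, node_to_remove):
    # Removing a node shrinks the cluster, so quorum is recomputed against
    # the remaining nodes; only the remaining online nodes can vote.
    remaining = [n for n in all_nodes if n != node_to_remove]
    remaining_online = [n for n in online_nodes if n != node_to_remove]
    return len(remaining_online) < quorum_votes(len(remaining))

# The test above: 3 nodes, rh70-node2 stopped, removing rh70-node3
# would leave 1 of 2 nodes online, below the quorum of 2 votes.
nodes = ["rh70-node1", "rh70-node2", "rh70-node3"]
online = ["rh70-node1", "rh70-node3"]
print(removal_loses_quorum(nodes, online, "rh70-node3"))  # True -> refuse

Note also the exit statuses in the transcript: the refused removal exits with 1, so unattended scripts fail safely, while --force bypasses the check and the removal exits with 0.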

Comment 4 Tomas Jelinek 2015-06-04 14:41:49 UTC
Before Fix:
[root@rh71-node1 ~]# rpm -q pcs
pcs-0.9.137-13.el7_1.2.x86_64
[root@rh71-node1:~]# pcs status nodes both
Corosync Nodes:
 Online: rh71-node1 rh71-node2 rh71-node3 
 Offline: 
Pacemaker Nodes:
 Online: rh71-node1 rh71-node2 rh71-node3 
 Standby: 
 Offline: 
[root@rh71-node1:~]# pcs cluster stop rh71-node2
rh71-node2: Stopping Cluster (pacemaker)...
rh71-node2: Stopping Cluster (corosync)...
[root@rh71-node1:~]# pcs cluster node remove rh71-node3
rh71-node3: Stopping Cluster (pacemaker)...
rh71-node3: Successfully destroyed cluster
rh71-node1: Corosync updated
rh71-node2: Corosync updated


After Fix:
[root@rh71-node1:~]# rpm -q pcs
pcs-0.9.140-1.el7.x86_64
[root@rh71-node1:~]# pcs status nodes both
Corosync Nodes:
 Online: rh71-node1 rh71-node2 rh71-node3 
 Offline: 
Pacemaker Nodes:
 Online: rh71-node1 rh71-node2 rh71-node3 
 Standby: 
 Offline: 
[root@rh71-node1:~]# pcs cluster stop rh71-node2
rh71-node2: Stopping Cluster (pacemaker)...
rh71-node2: Stopping Cluster (corosync)...
[root@rh71-node1:~]# pcs cluster node remove rh71-node3
Error: Removing the node will cause a loss of the quorum, use --force to override
[root@rh71-node1:~]# echo $?
1
[root@rh71-node1:~]# pcs status nodes both
Corosync Nodes:
 Online: rh71-node1 rh71-node3 
 Offline: rh71-node2 
Pacemaker Nodes:
 Online: rh71-node1 rh71-node3 
 Standby: 
 Offline: rh71-node2 
[root@rh71-node1:~]# pcs cluster node remove rh71-node3 --force
rh71-node3: Stopping Cluster (pacemaker)...
rh71-node3: Successfully destroyed cluster
rh71-node1: Corosync updated
rh71-node2: Corosync updated
[root@rh71-node1:~]# echo $?
0
[root@rh71-node1:~]# pcs status nodes both
Corosync Nodes:
 Online: rh71-node1 
 Offline: rh71-node2 
Pacemaker Nodes:
 Online: rh71-node1 
 Standby: 
 Offline: rh71-node2

Comment 8 errata-xmlrpc 2015-11-19 09:34:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-2290.html

