Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
* Previously, pcs stopped cluster nodes sequentially, one at a time, which caused cluster resources to be moved from one node to another pointlessly. Consequently, the stop operation took a long time to finish, and losing the quorum during the process could result in node fencing. With this update, pcs stops the nodes simultaneously, which prevents the resources from being moved around pointlessly and speeds up the stop operation. In addition, pcs prints a warning if stopping the nodes would cause the cluster to lose the quorum. To stop the nodes in this situation, the user is required to add the "--force" option. (BZ#1174801, BZ#1184763)
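The quorum warning described above can be sketched as follows. This is an illustrative sketch, not pcs source code: the function and variable names are hypothetical, and the quorum rule shown (a strict majority of all configured nodes) reflects the usual corosync votequorum default.

```python
def stopping_loses_quorum(all_nodes, nodes_to_stop):
    """Return True if stopping the given nodes would leave fewer nodes
    running than are needed to form a quorum (a strict majority)."""
    remaining = len(set(all_nodes) - set(nodes_to_stop))
    quorum = len(all_nodes) // 2 + 1  # strict majority of all nodes
    return remaining < quorum

def stop_nodes(all_nodes, nodes_to_stop, force=False):
    """Refuse to stop nodes that would break quorum unless forced."""
    if stopping_loses_quorum(all_nodes, nodes_to_stop) and not force:
        raise SystemExit(
            "Error: stopping these nodes would cause the cluster to lose "
            "the quorum, use --force to override"
        )
    # ...here the real tool would stop all requested nodes in parallel...
    return sorted(nodes_to_stop)
```

For example, in a three-node cluster the quorum is two, so stopping two nodes at once would leave only one and the sketch aborts unless `force=True` is passed.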
Description of problem:
Pcs starts and stops cluster nodes sequentially. This leads to a slow cluster start, as each starting node waits for the rest of the nodes, times out, then the next node is started, and so on. Similarly, when a cluster is stopped sequentially, pacemaker moves resources from node to node as the nodes are stopped one by one.
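The timing difference between the sequential behavior described above and a simultaneous stop can be illustrated with a small sketch. This is not pcs source code: each per-node stop is stood in for by a short sleep representing the network call pcs makes to that node, and all names are hypothetical.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def stop_node(node, delay=0.1):
    time.sleep(delay)  # stands in for the per-node stop request
    return f"{node}: Stopping Cluster..."

def stop_sequentially(nodes):
    # One node at a time: total time grows as N * delay.
    return [stop_node(n) for n in nodes]

def stop_in_parallel(nodes):
    # All nodes at once: total time stays around 1 * delay.
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        return list(pool.map(stop_node, nodes))
```

With three nodes and a 0.1 s per-node delay, the sequential version takes roughly 0.3 s while the parallel version takes roughly 0.1 s, which mirrors why the sequential `pcs cluster start --all` above takes over a minute.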
Version-Release number of selected component (if applicable):
pcs-0.9.123-9.el6
How reproducible:
Always
Steps to Reproduce:
1. pcs cluster setup --name cluster66 rh66-node1 rh66-node2 --start
2. pcs cluster stop --all
3. pcs cluster start --all
Actual results:
[root@rh66-node1:~]# time pcs cluster setup --name cluster66 rh66-node1 rh66-node2 --start
rh66-node1: Updated cluster.conf...
rh66-node2: Updated cluster.conf...
Starting cluster on nodes: rh66-node1, rh66-node2...
rh66-node1: Starting Cluster...
rh66-node2: Starting Cluster...
real 1m8.530s
user 0m0.478s
sys 0m0.118s
[root@rh66-node1:~]# pcs cluster stop --all
rh66-node1: Stopping Cluster...
rh66-node2: Stopping Cluster...
[root@rh66-node1:~]# time pcs cluster start --all
rh66-node1: Starting Cluster...
rh66-node2: Starting Cluster...
real 1m7.936s
user 0m0.081s
sys 0m0.015s
Expected results:
When run simultaneously on both nodes:
[root@rh66-node1:~]# time pcs cluster start
Starting Cluster...
real 0m10.702s
user 0m0.192s
sys 0m0.227s
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.
https://rhn.redhat.com/errata/RHBA-2015-1446.html