Bug 1174801 - Parallelize cluster start and cluster stop
Summary: Parallelize cluster start and cluster stop
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: pcs
Version: 6.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Assignee: Tomas Jelinek
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-12-16 14:09 UTC by Tomas Jelinek
Modified: 2015-07-22 06:15 UTC (History)
4 users

Fixed In Version: pcs-0.9.138-1.el6
Doc Type: Bug Fix
Doc Text:
* Previously, pcs stopped cluster nodes sequentially one at a time, which caused the cluster resources to be moved from one node to another pointlessly. Consequently, the stop operation took a long time to finish. Also, losing the quorum during the process could result in node fencing. With this update, pcs stops the nodes simultaneously, preventing the resources from being moved around pointlessly and speeding up the stop operation. In addition, pcs prints a warning if stopping the nodes would cause the cluster to lose the quorum. To stop the nodes in this situation, the user is required to add the "--force" option. (BZ#1174801, BZ#1184763)
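The quorum check described above can be sketched in a few lines. This is an illustrative model only, not the actual pcs internals: `quorum_lost` and `stop_nodes` are hypothetical names, and it assumes one vote per node with quorum requiring strictly more than half of the total votes.

```python
def quorum_lost(total_votes, votes_to_stop):
    # Quorum requires strictly more than half of the total votes to
    # remain; assume one vote per node (illustrative simplification).
    remaining = total_votes - votes_to_stop
    return remaining * 2 <= total_votes

def stop_nodes(total_votes, votes_to_stop, force=False):
    # Refuse to stop a subset of nodes that would break quorum unless
    # the user passed --force; stopping all nodes is always allowed.
    if (votes_to_stop < total_votes
            and quorum_lost(total_votes, votes_to_stop)
            and not force):
        return "error: quorum would be lost, use --force to override"
    return "stopping"
```

For example, in a 3-node cluster, stopping 2 nodes leaves 1 vote of 3, which is not a majority, so the sketch returns the error unless `force=True`.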
Clone Of:
Environment:
Last Closed: 2015-07-22 06:15:58 UTC


Attachments (Terms of Use)
proposed fix (6.51 KB, patch)
2014-12-16 14:13 UTC, Tomas Jelinek


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:1446 normal SHIPPED_LIVE pcs bug fix and enhancement update 2015-07-20 18:43:57 UTC

Description Tomas Jelinek 2014-12-16 14:09:05 UTC
Description of problem:
Pcs starts and stops cluster nodes sequentially, one at a time. This makes cluster start slow: the first node starts, waits for the rest of the nodes to appear, eventually gives up, and only then is the next node started, and so on. Similarly, when a cluster is stopped sequentially, pacemaker pointlessly moves resources from node to node as the nodes are stopped one by one.
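The fix is to issue the per-node start (and stop) requests concurrently instead of one after another. A minimal sketch of the idea, assuming a hypothetical `start_node()` helper standing in for the real per-node request (pcs sends these to the pcsd daemon on each node):

```python
from concurrent.futures import ThreadPoolExecutor

def start_node(node):
    # Placeholder for the real per-node start request; in pcs this is
    # a network call to the pcsd daemon on the given node.
    print("%s: Starting Cluster..." % node)
    return node

def start_all(nodes):
    # Run all per-node starts concurrently, so the nodes come up
    # together and total wall time is roughly that of the slowest
    # node rather than the sum over all nodes.
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        return list(pool.map(start_node, nodes))
```

With concurrent starts the nodes find each other immediately instead of each waiting for peers that have not been started yet, which is why the timings in the comments below drop from roughly one node-timeout per node to a single startup interval.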


Version-Release number of selected component (if applicable):
pcs-0.9.123-9.el6


How reproducible:
Always


Steps to Reproduce:
1. pcs cluster setup --name cluster66 rh66-node1 rh66-node2 --start
2. pcs cluster stop --all
3. pcs cluster start --all


Actual results:
[root@rh66-node1:~]# time pcs cluster setup --name cluster66 rh66-node1 rh66-node2 --start
rh66-node1: Updated cluster.conf...
rh66-node2: Updated cluster.conf...
Starting cluster on nodes: rh66-node1, rh66-node2...
rh66-node1: Starting Cluster...
rh66-node2: Starting Cluster...

real    1m8.530s
user    0m0.478s
sys     0m0.118s
[root@rh66-node1:~]# pcs cluster stop --all
rh66-node1: Stopping Cluster...
rh66-node2: Stopping Cluster...
[root@rh66-node1:~]# time pcs cluster start --all
rh66-node1: Starting Cluster...
rh66-node2: Starting Cluster...

real    1m7.936s
user    0m0.081s
sys     0m0.015s


Expected results:
When run simultaneously on both nodes:
[root@rh66-node1:~]# time pcs cluster start
Starting Cluster...

real    0m10.702s
user    0m0.192s
sys     0m0.227s

Comment 1 Tomas Jelinek 2014-12-16 14:13:17 UTC
Created attachment 969582 [details]
proposed fix

Test:

[root@rh66-node1:~]# time pcs cluster setup --name cluster66 rh66-node1 rh66-node2 --start
rh66-node1: Updated cluster.conf...
rh66-node2: Updated cluster.conf...
Starting cluster on nodes: rh66-node1, rh66-node2...
rh66-node2: Starting Cluster...
rh66-node1: Starting Cluster...

real    0m11.826s
user    0m0.511s
sys     0m0.137s
[root@rh66-node1:~]# pcs cluster stop --all
rh66-node2: Stopping Cluster...
rh66-node1: Stopping Cluster...
[root@rh66-node1:~]# time pcs cluster start --all
rh66-node2: Starting Cluster...
rh66-node1: Starting Cluster...

real    0m11.084s
user    0m0.143s
sys     0m0.016s

Comment 4 Tomas Jelinek 2015-01-27 14:07:24 UTC
Before Fix:
[root@rh66-node1 ~]# rpm -q pcs
pcs-0.9.123-9.el6.x86_64
[root@rh66-node1:~]# time pcs cluster setup --start --name myCluster rh66-node1 rh66-node2 rh66-node3
rh66-node1: Updated cluster.conf...
rh66-node2: Updated cluster.conf...
rh66-node3: Updated cluster.conf...
Starting cluster on nodes: rh66-node1, rh66-node2, rh66-node3...
rh66-node1: Starting Cluster...
rh66-node2: Starting Cluster...
rh66-node3: Starting Cluster...

real    0m33.446s
user    0m0.556s
sys     0m0.133s
[root@rh66-node1:~]# time pcs cluster stop --all
rh66-node1: Stopping Cluster...
rh66-node2: Stopping Cluster...
rh66-node3: Stopping Cluster...

real    0m10.843s
user    0m0.098s
sys     0m0.011s
[root@rh66-node1:~]# time pcs cluster start --all
rh66-node1: Starting Cluster...
rh66-node2: Starting Cluster...
rh66-node3: Starting Cluster...

real    0m32.688s
user    0m0.095s
sys     0m0.013s



After Fix:
[root@rh66-node1:~]# rpm -q pcs
pcs-0.9.138-1.el6.x86_64
[root@rh66-node1:~]# time pcs cluster setup --start --name myCluster rh66-node1 rh66-node2 rh66-node3
rh66-node1: Updated cluster.conf...
rh66-node2: Updated cluster.conf...
rh66-node3: Updated cluster.conf...
Starting cluster on nodes: rh66-node1, rh66-node2, rh66-node3...
rh66-node1: Starting Cluster...
rh66-node3: Starting Cluster...
rh66-node2: Starting Cluster...

real    0m13.460s
user    0m0.696s
sys     0m0.170s
[root@rh66-node1:~]# time pcs cluster stop --all
rh66-node2: Stopping Cluster (pacemaker)...
rh66-node1: Stopping Cluster (pacemaker)...
rh66-node3: Stopping Cluster (pacemaker)...
rh66-node3: Stopping Cluster (cman)...
rh66-node2: Stopping Cluster (cman)...
rh66-node1: Stopping Cluster (cman)...

real    0m5.070s
user    0m0.169s
sys     0m0.017s
[root@rh66-node1:~]# time pcs cluster start --all
rh66-node1: Starting Cluster...
rh66-node3: Starting Cluster...
rh66-node2: Starting Cluster...

real    0m12.161s
user    0m0.152s
sys     0m0.024s

Comment 8 errata-xmlrpc 2015-07-22 06:15:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1446.html

