+++ This bug was initially created as a clone of Bug #315631 +++
Description of problem:
Problems occur when attempting the 'restart cluster' operation. From what I've
seen, different things will happen depending on what state the cluster is in
before the restart is executed.
Scenario 1: Start with the cluster stopped
Then, after attempting the 'restart' with conga, the cluster will usually start
properly on all nodes, but sometimes it will fail to start the service on one of
the nodes in the cluster.
Scenario 2: Start with the cluster started but no clvmd or rgmanager
Then, after attempting the 'restart' with conga, the cluster will most likely
end up with all nodes in the stopped state, without the start operation even
appearing to have been attempted; sometimes it will start on only a subset of
the nodes in the cluster.
Scenario 3: Start with the cluster started and all services running
Then, after attempting the 'restart' with conga, the cluster will either end up
completely stopped without a start being attempted, end up in a hung loop due to
timing issues, or the restart will appear to work on only a subset of the nodes.
I've run these commands manually countless times and never seen issues, that is:
for i in rgmanager clvmd cman; do service $i stop; done
for i in cman clvmd rgmanager; do service $i start; done
Version-Release number of selected component (if applicable):
-- Additional comment from firstname.lastname@example.org on 2007-10-02 16:38 EST --
Problems occur because some nodes may be starting while others are still in the
process of stopping. According to sdake, this ought to work (at present it
definitely does not work consistently), but we can easily work around it in
conga.
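The workaround described above amounts to a two-phase restart with a barrier between the phases: stop the stack on every node, confirm all stops have completed, and only then begin starting. A minimal sketch of that sequencing follows; the node names are assumptions, and run_on is a stand-in for however conga dispatches commands to a node (it is not a real conga or ricci API):

```shell
#!/bin/sh
# Sketch of a two-phase cluster restart with a barrier between phases.
# NODES and run_on are illustrative assumptions, not conga internals.
NODES="node1 node2 node3"

# Stub standing in for remote dispatch (e.g. ssh or ricci) to a node.
run_on() {
  n=$1; shift
  echo "$n: $*"
}

# Phase 1: stop the stack on every node, in reverse dependency order.
for n in $NODES; do
  for s in rgmanager clvmd cman; do
    run_on "$n" "service $s stop"
  done
done

# Barrier: no node begins starting until every node has finished stopping.
# This is the step the current restart operation is missing.

# Phase 2: start the stack on every node, in forward dependency order.
for n in $NODES; do
  for s in cman clvmd rgmanager; do
    run_on "$n" "service $s start"
  done
done
```

The point of the barrier is that cman on a starting node never has to join a membership that still contains nodes mid-shutdown, which is the race described in this report.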
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.