Bug 333181 - Add option to not fail-back service
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: conga
Platform: All
OS: Linux
Priority: low
Severity: low
Assigned To: Ryan McCabe
QA Contact: Cluster QE
Keywords: Documentation
Reported: 2007-10-15 17:44 EDT by Anthony Green
Modified: 2010-10-22 15:30 EDT (History)
CC: 3 users

Fixed In Version: RHBA-2008-0798
Doc Type: Bug Fix
Last Closed: 2008-07-25 15:14:18 EDT

Attachments: None
Description Anthony Green 2007-10-15 17:44:09 EDT
Description of problem:
We use the priority option with our active/active cluster to ensure that all of
our databases don't start on the same node. What we've noticed is that when a
node is fenced and then returns to the cluster, the databases are automatically
redistributed based on the priority settings in cluster.conf.

The issue with this behavior is that if a node crashes, comes back up, crashes
again, and comes back up again, the database will be needlessly failed over
several times until someone logs into the cluster and stops it.

We request an option to have the databases stay put when a failed node returns
to the cluster.    

Comment 2 Lon Hohberger 2007-10-16 15:21:11 EDT
This can be manually configured currently by specifying nofailback="1" as part
of the failover domain configuration in cluster.conf, for example:

                        <failoverdomain name="all" ordered="1" nofailback="1">
                                <failoverdomainnode name="molly"/>
                                <failoverdomainnode name="frederick"/>
                        </failoverdomain>

Note that this option does not work when dealing with a service bound to a
domain but currently running outside of its failover domain - for example, if a
service was running on a node named "buster", it would currently move to either
"molly" or "frederick" if one of those nodes came online irrespective of the
nofailback option.

The nofailback option is available for RHEL4 and RHEL5.
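
As a sketch of how the pieces fit together, a service is tied to a failover domain through its domain attribute in the rm section of cluster.conf. The service name, node names, and priority values below are illustrative, not taken from any actual configuration:

```xml
<rm>
        <failoverdomains>
                <!-- ordered="1": nodes are preferred by priority;
                     nofailback="1": a running service stays put when a
                     higher-priority node rejoins the cluster -->
                <failoverdomain name="all" ordered="1" nofailback="1">
                        <failoverdomainnode name="molly" priority="1"/>
                        <failoverdomainnode name="frederick" priority="2"/>
                </failoverdomain>
        </failoverdomains>
        <!-- hypothetical database service bound to the "all" domain -->
        <service name="db1" domain="all" autostart="1"/>
</rm>
```

Per the caveat above, if db1 were running on a node outside the domain (such as "buster"), it would still move to molly or frederick when one of them came online, regardless of the nofailback setting.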
Comment 3 Anthony Green 2007-10-16 15:52:22 EDT
(In reply to comment #2)
> The nofailback option is available for RHEL4 and RHEL5

Thanks.  Just to be clear -- this feature exists but has been undocumented to
date.  I assume we will be updating the documentation to cover this feature, correct?

Comment 4 Rob Kenna 2007-10-17 18:00:18 EDT
re: comment #3.  Yes, we will be adding this to the documentation.

- Rob
Comment 7 Ryan McCabe 2008-01-18 13:53:46 EST
Fix will be backported from RHEL5.
Comment 8 Lon Hohberger 2008-03-04 17:46:27 EST
Removing dependency from 182423.
Comment 11 Kiersten (Kerri) Anderson 2008-04-18 16:47:21 EDT
Adding the feature described in bug #333181 introduced a regression that causes
an exception to be thrown when updating or adding a failover domain. This makes
it impossible to create or update failover domains via conga.
Comment 13 Brian Brock 2008-05-23 14:47:07 EDT
fix verified in 0.11.1-4.el4, GUI option in the Failover Domain subsection of
the cluster tab creates the cluster.conf option described above.
Comment 14 Paul Kennedy 2008-06-10 18:19:08 EDT
This needs to be documented in RHEL 4 Cluster_Administration. Likewise for RHEL
5 Cluster_Administration. Because the RHEL 5 bug for this is closed, I have
opened a doc bug for it: bug #450777. Will port change to RHEL 4 doc.
Comment 16 errata-xmlrpc 2008-07-25 15:14:18 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

