+++ This bug was initially created as a clone of Bug #669440 +++
Created attachment 473374
Patch that fixes failback of a service that is in the "starting" state
Description of problem:
If a failover domain has "nofailback" enabled, a service that is in the "starting" state will still fail back to the original node: if the service takes a while to come up and the original node rejoins the cluster while the service is still "starting", the service is failed back once it finishes starting.
A patch that resolves this issue is attached.
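The window is visible from clustat while the service is coming up; a minimal way to watch for it, assuming rgmanager's clustat utility is available on a cluster node:

# Refresh cluster status once per second; the failback window described
# above is open while the service state column reads "starting".
clustat -i 1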
Version-Release number of selected component (if applicable):
rgmanager-2.0.52-9.el5
How reproducible:
Every time
Steps to Reproduce:
1. Start a service on NodeA, which is in failover domain X with "nofailback" and "ordered" enabled. To reproduce the problem, the service must spin for a few seconds during startup so that it sits in the "starting" state for a while.
2. Stop rgmanager on NodeA.
3. Watch the logs on NodeB, and start rgmanager on NodeA once NodeB begins starting the failed-over service (see the command sketch below).
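As a command-level sketch of the steps above, assuming the stock RHEL 5 init scripts and default log location (run the NodeB commands in separate terminals):

# On NodeA: stop rgmanager so the service fails over to NodeB.
service rgmanager stop

# On NodeB: watch the service come up; it should sit in "starting" for a while.
tail -f /var/log/messages
clustat -i 1

# On NodeA, while NodeB still shows the service as "starting":
service rgmanager start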
Actual results:
The service fails back to NodeA.
Expected results:
The service should not fail back.
Additional info:
--- Additional comment from lhh on 2011-01-26 10:11:50 EST ---
http://git.fedorahosted.org/git/?p=cluster.git;a=commit;h=9a4c263b0367fe2d80127e17b94c049e9ee2c59e
Posted to STABLE31
Testing:
1. Create a two or more node cluster
2. Create a service with a script resource that sleeps for a long time:
#!/bin/sh
# Spin in the start handler long enough to hold the service in 'starting'.
if [ "$1" = "start" ]; then
    sleep 300
fi
exit 0
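The script only needs to burn time in its start handler. A sketch of installing it, assuming the hypothetical path /usr/local/bin/slow-start.sh, which the service's script resource in cluster.conf would then point at:

# Install the script where the script resource agent can execute it
# (the path is a hypothetical choice; any absolute path works).
install -m 0755 slow-start.sh /usr/local/bin/slow-start.sh

# Local sanity check: "start" should block for ~300 seconds, then exit 0.
/usr/local/bin/slow-start.sh start; echo $?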
3. Configure an ordered failover domain with nofailback set:
<failoverdomain name="node1" ordered="1" nofailback="1">
    <failoverdomainnode name="node1" priority="1" />
    <failoverdomainnode name="node2" priority="2" />
</failoverdomain>
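Before starting the daemons, the resulting configuration can be sanity-checked offline with rgmanager's rg_test utility; a sketch, assuming /etc/cluster/cluster.conf is already in place on the node:

# Print and evaluate the resource tree without touching the running cluster.
rg_test test /etc/cluster/cluster.conf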
4. Start cman and rgmanager on node2
5. Watch clustat. While the service is in the 'starting' state, start cman and rgmanager on node1 (see the command sketch below).
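As a command-level sketch of steps 4 and 5, assuming the stock RHEL 5 init scripts:

# On node2: bring up the stack; node1 is down, so the service starts here.
service cman start
service rgmanager start
clustat -i 1

# On node1, while clustat still shows the service in the 'starting' state:
service cman start
service rgmanager start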
Before the patch:
* Once the service reaches the 'started' state, it will relocate to node1 (incorrect)
After the patch:
* The service will remain on node2 (correct)
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.
http://rhn.redhat.com/errata/RHBA-2011-0750.html