Bug 672841

Summary: Service will failback on "nofailback" failover domain if service is in "starting" state
Product: Red Hat Enterprise Linux 6
Reporter: Shane Bradley <sbradley>
Component: rgmanager
Assignee: Lon Hohberger <lhh>
Status: CLOSED ERRATA
QA Contact: Toure Dunnon <tdunnon>
Severity: medium
Priority: low
Version: 6.0
CC: cluster-maint, edamato, tdunnon
Target Milestone: rc
Hardware: Unspecified
OS: Linux
Fixed In Version: rgmanager-3.0.12-11.el6
Doc Type: Bug Fix
Clone Of: 669440
Bug Depends On: 669440
Last Closed: 2011-05-19 14:18:35 UTC

Description Shane Bradley 2011-01-26 15:14:17 UTC
+++ This bug was initially created as a clone of Bug #669440 +++

Created attachment 473374 [details]
Patch that fixes the failback issue when a service is in the "starting" state

Description of problem:
If a failover domain has "nofailback" enabled, a service in the "starting" state will still fail back to the original node. If a service takes a while to come up and the original node rejoins the cluster while the service is still in the "starting" state, the service is failed back.

A patch that resolves this issue is attached.

Version-Release number of selected component (if applicable):
rgmanager-2.0.52-9.el5

How reproducible:
Every time

Steps to Reproduce:
1. Start a service on NodeA in failover domain X, which has "nofailback" and "ordered" enabled. To reproduce the issue, the service must spin for a few seconds so that it stays in the "starting" state long enough.
2. Stop rgmanager on NodeA.
3. Watch the logs on NodeB, and start rgmanager on NodeA once NodeB begins starting the failed-over service (see the command sketch below).
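
A minimal command sketch of these steps, assuming a hypothetical service named "slow-svc"; clusvcadm, clustat, and the rgmanager init script are the standard rgmanager/cman tools, and only the service and node names are illustrative:

# On NodeA: enable the service; NodeA is the preferred member of
# failover domain X, so the service starts there.
clusvcadm -e slow-svc

# On NodeA: stop rgmanager so the service fails over to NodeB.
service rgmanager stop

# On NodeB: watch the service state (refresh every second); when it
# shows "starting", restart rgmanager on NodeA.
clustat -i 1

# On NodeA, while the service is still "starting" on NodeB:
service rgmanager start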
  
Actual results:
Service will failback

Expected results:
Service should not failback

Additional info:

--- Additional comment from lhh on 2011-01-26 10:11:50 EST ---

http://git.fedorahosted.org/git/?p=cluster.git;a=commit;h=9a4c263b0367fe2d80127e17b94c049e9ee2c59e

Posted to STABLE31

Comment 1 Lon Hohberger 2011-01-31 18:48:56 UTC
Testing:

1. Create a two or more node cluster
2. Create a service with a script resource that sleeps for a long time:

#!/bin/sh
# Dummy resource script: "start" sleeps for five minutes so the service
# stays in the "starting" state long enough to exercise nofailback.

if [ "$1" = "start" ]; then
    sleep 300
fi

exit 0

3. Configure an ordered failover domain with nofailback set:
   <failoverdomain name="node1" ordered="1" nofailback="1">
     <failoverdomainnode name="node1" priority="1" />
     <failoverdomainnode name="node2" priority="2" />
   </failoverdomain>

4. Start cman and rgmanager on node2
5. Watch clustat.  While the service is in the 'starting' state, start cman and rgmanager on node1.
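
A command-level sketch of steps 4 and 5, assuming the script above is installed as an executable script resource and the resulting service is named "slow-svc" (the path and service name are only illustrative; the cman/rgmanager init scripts and clustat are standard RHEL 6 cluster tools):

# Step 2 (prep): the test script must be executable before it is
# referenced as a <script> resource in the service definition.
chmod +x /usr/local/sbin/slow.sh   # hypothetical path

# Step 4: on node2 only, bring up the cluster stack and rgmanager.
service cman start
service rgmanager start

# Step 5: on node2, watch the service; it sits in "starting" for about
# five minutes because of the sleep in the start action.
clustat -i 2

# While clustat still shows the service in the "starting" state, bring
# node1 into the cluster:
service cman start        # on node1
service rgmanager start   # on node1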

Before the patch:
* Once the service reaches the 'started' state, it will relocate to node1 (incorrect)

After the patch:
* The service will remain on node2 (correct)
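
One way to verify the post-patch behavior, assuming the same hypothetical service name; rpm and clustat are standard commands, and the package version comes from the Fixed In Version field above:

# Confirm the fixed package is installed on both nodes
# (rgmanager-3.0.12-11.el6 or later).
rpm -q rgmanager

# After node1 rejoins, keep watching clustat on either node and confirm
# the service owner remains node2 once the state changes to "started".
clustat -i 2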

Comment 6 errata-xmlrpc 2011-05-19 14:18:35 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-0750.html