Bug 229650 - Restart only the failed resource and its dependencies instead of the whole service.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: rgmanager
Version: 5.0
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Lon Hohberger
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 239594
 
Reported: 2007-02-22 14:40 UTC by Simone Gotti
Modified: 2009-04-16 22:36 UTC

Fixed In Version: RHBA-2007-0580
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2007-11-07 16:45:42 UTC
Target Upstream Version:
Embargoed:


Attachments
Patch that will not handle the special flag discussed in comments #1 and #2. (5.48 KB, patch), 2007-02-23 10:06 UTC, Simone Gotti
rhel5 patch (13.34 KB, patch), 2007-05-24 19:30 UTC, Lon Hohberger
Example script which causes a failed status check (114 bytes, text/plain), 2007-05-24 19:32 UTC, Lon Hohberger
Example script which causes a successful status check (92 bytes, text/plain), 2007-05-24 19:32 UTC, Lon Hohberger
Example config 1 (351 bytes, text/plain), 2007-05-24 19:33 UTC, Lon Hohberger
Example config 2 (351 bytes, text/plain), 2007-05-24 19:35 UTC, Lon Hohberger
Updated patch; fixes corner case (13.60 KB, patch), 2007-05-31 18:57 UTC, Lon Hohberger
ancillary patch (1.60 KB, patch), 2007-06-13 21:47 UTC, Lon Hohberger
Fix restart bug (2.47 KB, patch), 2007-08-30 16:00 UTC, Lon Hohberger
Restart only independent subtrees if a non independent child fails. (1.11 KB, patch), 2007-09-25 14:55 UTC, Simone Gotti
Updated patch (1.11 KB, patch), 2007-09-25 18:24 UTC, Lon Hohberger


Links
Red Hat Product Errata RHBA-2007:0580 (SHIPPED_LIVE): rgmanager bug fix and enhancement update, last updated 2007-10-30 15:37:24 UTC

Description Simone Gotti 2007-02-22 14:40:12 UTC
It would be a good idea to restart only the failed resource and its dependencies
instead of the whole service.

For example:

<service>
	<oracle/>
	<ip>
		<script/>
	</ip>
</service>

If ip fails: stop script, stop ip, start ip, start script, without also
restarting oracle.

This would probably require the ability to disable the implied dependency
ordering (I'm not sure about this).

Another question: if a resource's agent defines maxinstances > 1, it can appear
multiple times in the same service or in different services, so before it is
stopped all of its dependencies need to be calculated (but I think this can be
filed as a separate bug report, since for now the problem appears to be avoided
by not stopping the resource, as happens with clusterfs).

Comment 1 Lon Hohberger 2007-02-22 15:05:30 UTC
Ok, this will require additional configuration information, because currently,
resources have an "all children alive" requirement.  That is, if any parent
resource has a child which is not in proper operational status, the parent is
considered broken as well.

This, of course, makes status checks easy: the service is broken if anything in
the service is broken.

What we need is a way to have rgmanager ignore errors if they're immediately
correctable on a per-resource basis.  This is like the "recovery" flag -
however, the "recovery" operation is not allowed to affect *any* other resources
- even if an explicit parent/child or other dependency exists.  So, "recovery"
will not solve the problem if a resource has children.

So, what we basically need is a special attribute which can be assigned to
any/all resources which says (basically):

"This subtree is not dependent on its siblings, and can be safely restarted
without the parent being considered broken".

So, to expand on your example:

<service>
	<oracle />
	<ip special_new_attr="1">
		<script/>
	</ip>
</service>

If the IP address winds up missing, the script is stopped and the IP is stopped,
then restarted.  If oracle winds up broken, *everything* is restarted.  To make
them completely independent:

<service>
	<oracle special_new_attr="1"/>
	<ip special_new_attr="1">
		<script/>
	</ip>
</service>

This would work at all levels, too:

<service>
        <fs special_new_attr="1">
                <nfsexport>
                        <nfsclient/>
                </nfsexport>
        </fs>
        <fs special_new_attr="1">
                <nfsexport>
                        <nfsclient/>
                </nfsexport>
        </fs>
        <ip/>
</service>

The above example is an NFS service.  The two file system trees can be restarted
independently, but if the IP fails, everything must be restarted.  (Note: this
might not work for NFS services due to lock reclaim broadcasts; it is just here
for illustrative purposes.)

As for maxinstances, instances of shared resources are expected to be able to
operate independently.  That is, if one instance fails, it does not imply that
they all have failed.  If it does, something is broken in the resource agent
and/or rgmanager.  If it isn't possible to make the resource instances
completely independent of one another, then the resource agent should not define
maxinstances > 1.

Comment 2 Lon Hohberger 2007-02-22 15:31:03 UTC
<service>
        <fs name="one" special_new_attr="1">
                <nfsexport>
                        <nfsclient/>
                </nfsexport>
        </fs>
        <fs name="two" special_new_attr="1">
                <nfsexport>
                        <nfsclient/>
                </nfsexport>
                <script name="scooby"/>
        </fs>
        <ip/>
</service>

In this example, we add a script to the fs resource named "two".  If "two" fails,
all of its children must be restarted.  That is, the nfsexport (and client) and
the script named "scooby" are all restarted.  Similarly, adhering to current
rgmanager behavior, if any of "two"'s children fail, everything up to and
including "two" will be restarted.  For example, if "scooby" fails, the
nfsexport/nfsclient children of "two" will also be restarted - and so will "two"
itself.

However, the file system "one" will never be affected by any of "two"'s problems.

Comment 3 Simone Gotti 2007-02-23 10:06:21 UTC
Created attachment 148657 [details]
Patch that will not handle the special flag discussed in comments #1 and #2.

Comment 4 Simone Gotti 2007-02-23 10:07:26 UTC
Hi,

As requested on IRC, I attached an initial patch against CVS HEAD, written before
the considerations in comments #1 and #2.  This patch will ALWAYS stop resources
down to the failed one, and then start again from the failed one.

When the status check on a resource fails, the RF_NEEDSTOP and RF_NEEDSTART flags
of that node are set.
Two new functions, svc_condstart and svc_condstop, are added.

The call to svc_stop was moved inside handle_recover_req, so it first checks
the recovery policies and, if needed, calls svc_condstop instead of svc_stop.

Thanks!

Comment 5 Lon Hohberger 2007-02-23 19:11:58 UTC
Thanks Simone - I won't be able to look at this in detail for about a week or
so.  Sorry (in advance) for the delay!

Comment 6 Kiersten (Kerri) Anderson 2007-04-23 17:24:07 UTC
Fixing product name.  Cluster Suite was integrated into Red Hat Enterprise Linux
for version 5.0.

Comment 8 Lon Hohberger 2007-05-24 19:30:53 UTC
Created attachment 155385 [details]
rhel5 patch

This is a simplified patch which allows independent subtrees to be restarted as
part of a status check.

Comment 9 Lon Hohberger 2007-05-24 19:32:00 UTC
Created attachment 155386 [details]
Example script which causes a failed status check

Comment 10 Lon Hohberger 2007-05-24 19:32:40 UTC
Created attachment 155387 [details]
Example script which causes a successful status check
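
(The contents of these two attachments are not reproduced inline.  Judging from
the test scripts in comment #20 and the rg_test output in comment #13, the
"failed status check" script presumably looks something like the sketch below;
this is an illustration, not the actual attachment:)

#!/bin/sh
# Fail only the status phase; start and stop succeed so that
# inline recovery (stop/start) can complete.
[ "$1" = "status" ] && exit 1
exit 0

(The "successful status check" counterpart would simply exit 0 for every phase.)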

Comment 11 Lon Hohberger 2007-05-24 19:33:39 UTC
Created attachment 155388 [details]
Example config 1

Standard behavior.  Status fails w/o __independent_subtree flag, so the whole
service is noted as failed.

Comment 12 Lon Hohberger 2007-05-24 19:35:51 UTC
Created attachment 155389 [details]
Example config 2

Example config #2: the script tag referencing the 'status-fail' script has
__independent_subtree="1".  This causes the script to be restarted inline as
part of the status check.  If the restart of the independent subtree(s) fails,
the service is marked failed, and normal recovery takes place.

This makes __independent_subtree a bit like the "recover" action for resources,
except that children of a node marked __independent_subtree are also restarted
(the parent->child dependency relationship is **NOT** affected by the
__independent_subtree flag).
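
(The attached config is not shown inline.  Based on the description above and
the rg_test output in comment #13, the relevant part of example config #2
presumably looks roughly like this; resource names other than "foo" are
guesses:)

<service name="test00">
        <script name="foo" file="/tmp/status-fail" __independent_subtree="1"/>
        <script name="bar" file="/tmp/status-success"/>
</service>

(Example config #1 presumably differs only in that the __independent_subtree
flag is not set on the status-fail script, which is why its status failure
marks the whole service as failed, per comment #11.)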

Comment 13 Lon Hohberger 2007-05-24 19:37:07 UTC
[lhh@ayanami daemons]$ ./rg_test test ./cluster.conf-example1 status service test00
malloc_init: Warning: using unlocked memory pages (got root?)
Running in test mode.
Checking status of test00...
<info>   Executing /tmp/status-fail status
[script] /tmp/status-fail status
<err>    script:foo: status of /tmp/status-fail failed (returned 1)
Status check of test00 failed

(if this were rgmanager, service recovery would commence)


[lhh@ayanami daemons]$ ./rg_test test ./cluster.conf-example2 status service test00
malloc_init: Warning: using unlocked memory pages (got root?)
Running in test mode.
Checking status of test00...
<info>   Executing /tmp/status-fail status
[script] /tmp/status-fail status
<err>    script:foo: status of /tmp/status-fail failed (returned 1)
<info>   Executing /tmp/status-success status
Node script:foo - CONDSTOP
<info>   Executing /tmp/status-fail stop
[script] /tmp/status-fail stop
Node script:foo - CONDSTART
<info>   Executing /tmp/status-fail start
[script] /tmp/status-fail start
Status of test00 is good

... note inline recovery.

Comment 14 Lon Hohberger 2007-05-24 19:55:52 UTC
Example log messages:

May 24 15:55:10 ayanami rg_test: [13666]: <err> script:foo: status of
/tmp/status-fail failed (returned 1) 
May 24 15:55:10 ayanami rg_test[13666]: <notice> status on script "foo" returned
1 (generic error) 
May 24 15:55:10 ayanami rg_test[13666]: <warning> Some independent resources in
service:test00 failed; Attempting inline recovery 
May 24 15:55:10 ayanami rg_test[13666]: <notice> Inline recovery of
service:test00 successful 

Comment 15 Lon Hohberger 2007-05-31 18:57:45 UTC
Created attachment 155835 [details]
Updated patch; fixes corner case

This updated patch fixes a corner case where if a child of an independent
subtree node failed, the whole service was restarted.  Now just the independent
subtree is restarted.
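
(For illustration, a hypothetical configuration, not one of the attached
examples: with a resource tree like the one below, a failed status check on
script "child" previously restarted the entire service, including fs "data";
with this updated patch only the ip subtree, i.e. the ip and its child script,
is stopped and restarted.)

<service name="example">
        <ip address="10.1.1.1" __independent_subtree="1">
                <script name="child" file="/tmp/child.sh"/>
        </ip>
        <fs name="data" device="/dev/sdb1" mountpoint="/data" fstype="ext3"/>
</service>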

Comment 16 Lon Hohberger 2007-06-13 21:47:53 UTC
Created attachment 156923 [details]
ancillary patch

Patches in CVS

Comment 18 Lon Hohberger 2007-08-30 15:50:20 UTC
This doesn't always work with non-independent subtrees.

I am testing a patch.

Comment 19 Lon Hohberger 2007-08-30 16:00:21 UTC
Created attachment 181341 [details]
Fix restart bug

Comment 20 Lon Hohberger 2007-08-30 16:37:18 UTC
I unit tested the patch with:

  <script name="logger" file="/log-me.sh"/>

And a service like this:

  <service autostart="1" exclusive="0" name="badsvc" recovery="restart">
    <script file="/test-script.sh" name="false" __independent_subtree="0">
      <script ref="logger"/>
    </script>
    <script ref="logger"/>
  </service>

Contents of /log-me.sh:

#!/bin/sh
. /usr/share/cluster/ocf-shellfuncs
ocf_log notice "$0 $1"
exit 0

Contents of /test-script.sh:

#!/bin/sh
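# $1 is the operation rgmanager passes to a script resource (start/stop/status);
# fail whichever phase has a matching marker file in /tmp.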
[ -f "/tmp/test-script-$1" ] && exit 1
exit 0

How it works:

(1) When you create /tmp/test-script-status, the script will fail that phase
(see the shell example after this list).
(2) If you have __independent_subtree="1" in cluster.conf for the above service
you will only see log-me.sh stop/start once (the child of the test-script is
restarted).
(3) If you have __independent_subtree="0" in cluster.conf for the above service,
you will see log-me.sh stop/start twice, because the whole service is considered
failed.
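
For example, to exercise case (1) above from a shell on the node running the
service (assuming test-script.sh is installed as shown):

touch /tmp/test-script-status    # next status check fails
rm -f /tmp/test-script-status    # status checks succeed again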

Unit-tested against migratory services (a la VMs) in order to ensure that
non-cluster-induced migration was still detected correctly; it was.  After
migrating (manually, using 'xm migrate') guest1 to et-virt05 (node 1 in
cluster.conf):

Aug 30 12:27:42 et-virt06 clurgmgrd[5112]: <notice> status on vm "guest1"
returned 1 (generic error) 
Aug 30 12:27:42 et-virt06 clurgmgrd[5112]: <notice> Migration: vm:guest1 is
running on 1 



Comment 21 Lon Hohberger 2007-08-30 16:39:08 UTC
More-preferred migration still works as expected, though a spurious status
error showed up in the logs:

Aug 30 12:28:27 et-virt06 clurgmgrd[5112]: <info> State change:
et-virt07.lab.boston.redhat.com UP 
Aug 30 12:28:28 et-virt06 clurgmgrd[5112]: <notice> Migrating vm:guest3 to
better node et-virt07.lab.boston.redhat.com 
Aug 30 12:28:28 et-virt06 clurgmgrd[5112]: <notice> Migrating vm:guest3 to
et-virt07.lab.boston.redhat.com 
Aug 30 12:28:34 et-virt06 clurgmgrd[5112]: <notice> Migration of vm:guest3 to
et-virt07.lab.boston.redhat.com completed 
vvvvvv
Aug 30 12:28:35 et-virt06 clurgmgrd[5112]: <notice> status on vm "guest3"
returned 1 (generic error) 
^^^^^^

This is considered a separate, non-critical bug.


Comment 23 Simone Gotti 2007-09-25 14:54:16 UTC
I noticed, using the RHEL51 branch, that with a config like this:

<service autostart="1" domain="argle" name="test00">
    <script name="foo01" __independent_subtree="1" file="/tmp/script01">
        <script name="foo02" __independent_subtree="0" file="/tmp/script02"/>
        <script name="foo03" __independent_subtree="0" file="/tmp/script03"/>
    </script>
</service>

if foo02 fails, the whole service is restarted instead of only restarting
from foo01.

I wrote a little patch that tries to fix this problem. I hope the logic I
followed is correct.

Thanks!


Comment 24 Simone Gotti 2007-09-25 14:55:36 UTC
Created attachment 205561 [details]
Restart only independent subtrees if a non independent child fails.

Comment 25 Lon Hohberger 2007-09-25 15:20:44 UTC
Simone, your patch is correct.

Comment 26 Lon Hohberger 2007-09-25 15:41:23 UTC
I think the "return SFL_RECOVERABLE" should be "rv = SFL_RECOVERABLE".



Comment 27 Lon Hohberger 2007-09-25 18:24:41 UTC
Created attachment 205861 [details]
Updated patch

Slight change; set rv instead of returning.  Functionally, there's no change;
this is in case we add other stuff to the function later.

Comment 28 Simone Gotti 2007-09-25 21:04:35 UTC
I did some tests with the latest patch (id=205861) using various combinations of
resources and different __independent_subtree values, and it looks OK.

Ex:

<service autostart="1" domain="argle" name="test00">
    <script name="foo01" __independent_subtree="1" file="/tmp/script01">
        <script name="foo02" __independent_subtree="0" file="/tmp/script02"/>
        <script name="foo03" __independent_subtree="1" file="/tmp/script03"/>
    </script>
    <script name="foo04" __independent_subtree="0" file="/tmp/script04"/> 
</service>

fail foo01 : foo03, foo02, foo01 stopped; foo01, foo02, foo03 started;
fail foo02 : foo03, foo02, foo01 stopped; foo01, foo02, foo03 started;
fail foo03 : foo03 stopped; foo03 started;
fail foo04 : whole service restarted

Thanks!


Comment 32 errata-xmlrpc 2007-11-07 16:45:42 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2007-0580.html


