Bug 480835

Summary: clusvcadm -d outputs 'YES' if script resource fails on stop.
Product: [Retired] Red Hat Cluster Suite
Reporter: Issue Tracker <tao>
Component: rgmanager
Assignee: Lon Hohberger <lhh>
Status: CLOSED ERRATA
QA Contact: Cluster QE <mspqa-list>
Severity: medium
Priority: medium
Version: 4
CC: cluster-maint, edamato, tao
Hardware: All
OS: Linux
Doc Type: Bug Fix
Last Closed: 2009-05-18 21:13:16 UTC
Attachments: Fix

Description Issue Tracker 2009-01-20 20:07:52 UTC
Escalated to Bugzilla from IssueTracker

Comment 1 Issue Tracker 2009-01-20 20:07:55 UTC
Description of problem:

clusvcadm -d outputs 'YES' if script resource fails on stop.

Example:

Excerpt from cluster.conf:
		....
                <resources/>
                <service autostart="0" name="myservice">
                        <script file="/etc/init.d/myscript" name="myscript"/>
                </service>
		....

# cat /etc/init.d/myscript
#!/bin/sh
prog="myscript"
case "$1" in
  start)
        exit 0
        ;;
  stop)
        exit 1
        ;;
  status)
        exit 0
        ;;
  *)
        echo "Usage: $prog {start|stop|status}"
        exit 1
        ;;
esac

-- the service is working properly:

# clustat
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  rh4full.cluster                          Online, Local, rgmanager

  Service Name         Owner (Last)                   State         
  ------- ----         ----- ------                   -----         
  myservice            rh4full.cluster                started         


-- when the service is forcibly stopped, clusvcadm outputs Yes:

# clusvcadm -d myservice
Member rh4full.cluster disabling myservice...Yes

-- however, the service ends up in the failed state:

# clustat
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  rh4full.cluster                          Online, Local, rgmanager

  Service Name         Owner (Last)                   State         
  ------- ----         ----- ------                   -----         
  myservice            (rh4full.cluster)              failed          


-- on a second attempt, the output is correct:

# clusvcadm -d myservice
Member rh4full.cluster disabling myservice...success

# clustat
Member Status: Quorate

  Member Name                              Status
  ------ ----                              ------
  rh4full.cluster                          Online, Local, rgmanager

  Service Name         Owner (Last)                   State         
  ------- ----         ----- ------                   -----         
  myservice            (none)                         disabled        

-- on any further attempt, clusvcadm outputs Yes again; the output then keeps cycling between Yes and success.

How reproducible:

Every time.

Steps to Reproduce:

1. Create a service containing a script resource, and make that resource fail (exit non-zero) in its stop clause.

2. Run clusvcadm -d on the service, and observe that the output is 'YES'.
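The first step can be sketched as a short script. This is a minimal sketch using an arbitrary temporary path (/tmp/myscript) rather than the /etc/init.d/myscript path from the report; the cluster-side clusvcadm call needs a running rgmanager and is shown only as a comment:

```shell
#!/bin/sh
# Recreate the failing script resource described in the report.
# /tmp/myscript is an arbitrary test path, not the path from the report.
cat > /tmp/myscript <<'EOF'
#!/bin/sh
case "$1" in
  start)  exit 0 ;;
  stop)   exit 1 ;;  # the stop clause deliberately fails
  status) exit 0 ;;
  *)      echo "Usage: myscript {start|stop|status}"; exit 1 ;;
esac
EOF
chmod +x /tmp/myscript

# Confirm the exit codes rgmanager would see from each action:
/tmp/myscript start;  echo "start rc=$?"    # rc=0
/tmp/myscript stop;   echo "stop rc=$?"     # rc=1
/tmp/myscript status; echo "status rc=$?"   # rc=0

# With this script wired into a <service> as in the cluster.conf
# excerpt, the bug is then triggered on a cluster node with:
#   clusvcadm -d myservice    -> prints "...Yes"
```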

Actual results:

clusvcadm -d outputs the word 'YES' instead of 'success' or 'failure'.

Expected results:

clusvcadm -d should output 'success' or 'failure', not 'YES'.
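For contrast, the expected mapping from the stop operation's exit status to the printed word can be illustrated with a tiny helper. report_result is a hypothetical function sketching the desired behavior, not rgmanager code:

```shell
#!/bin/sh
# Hypothetical sketch: translate an exit status into the word the
# reporter expects clusvcadm to print. Not taken from rgmanager.
report_result() {
    if [ "$1" -eq 0 ]; then
        echo "success"
    else
        echo "failure"
    fi
}

report_result 0   # prints "success"
report_result 1   # prints "failure"
```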

Additional info:

RHEL4:
cman-1.0.24-1
dlm-1.0.7-1
magma-1.0.8-1
magma-plugins-1.0.14-1
rgmanager-1.9.80-1
This event sent from IssueTracker by edamato  [EMEA Production Escalation]
 issue 258013

Comment 3 Lon Hohberger 2009-03-02 20:57:32 UTC
Created attachment 333787 [details]
Fix

Comment 4 Lon Hohberger 2009-03-12 21:42:34 UTC
moved to rhel4 / rhel48

Comment 7 errata-xmlrpc 2009-05-18 21:13:16 UTC
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2009-1048.html