Description of problem:
Testing a cluster before moving it into production, we pulled the fiber path to the SAN. The postgresql service on the node failed, but did not start automatically on the other node (even with the service set to relocate). Fencing the "failed" node and trying a disable/enable on the working node started the service, but not in the way that was expected.

Version-Release number of selected component (if applicable):
-> rpm -q rgmanager cman
rgmanager-2.0.52-9.el5
cman-2.0.115-68.el5_6.1

How reproducible:
Always

Steps to Reproduce:
1. Cause the service on node1 to fail (I'm not sure whether the "<crit> #12: RG service:servicename failed to stop; intervention required" failure step is part of the problem or not)
2. Fence node1
3. On node2: clusvcadm -d servicename && clusvcadm -e servicename

Actual results:
[09:25:44][root@lnxdb02][/root]-> clusvcadm -d PostGres && clusvcadm -e PostGres
Local machine disabling service:PostGres...Yes
[09:26:14][root@lnxdb02][/root]-> clusvcadm -e PostGres
Local machine trying to enable service:PostGres...Success
service:PostGres is now running on lnxdb02.shsu.edu
[09:26:28][root@lnxdb02][/root]->

Expected results:
[09:28:21][root@lnxdb02][/root]-> clusvcadm -d PostGres && clusvcadm -e PostGres
Local machine disabling service:PostGres...Success
Local machine trying to enable service:PostGres...Success
service:PostGres is now running on lnxdb02.shsu.edu
[09:28:48][root@lnxdb02][/root]->

Additional info:
Note the different outputs for the disable command - that's why I'm submitting this bug. Everything else appears to be acting correctly (requiring intervention to start the service, which makes sense based on the error - although I'm not sure why intervention is required).
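For reference, steps 2-3 can be scripted; step 1 (the SAN failure) is a physical action, so it is only noted as a comment. This is a sketch, not a test script: the failed-node name below (lnxdb01) is a placeholder, since only lnxdb02 appears in the transcript.

# Step 1 (manual): pull the fiber path to the SAN on the failed node and
# wait for rgmanager to log:
#   <crit> #12: RG service:PostGres failed to stop; intervention required

# Step 2: fence the failed node from the surviving node
fence_node lnxdb01    # placeholder name for the failed node

# Step 3: disable and re-enable the service on the surviving node
clusvcadm -d PostGres && clusvcadm -e PostGres
# Symptom: the disable prints "...Yes" and exits nonzero, so the '&&'
# never runs the enable and it has to be issued by hand (see "Actual
# results" above).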
It looks like it has to do with the return codes when a service goes from "failed" -> "disabled" vs. when it goes from "started" -> "disabled".
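If anyone wants to confirm that, the exit status is easy to capture from the shell. This is only a diagnostic sketch using the service name from the transcript; the specific numeric codes are whatever rgmanager hands back, the point is just that they differ between the two transitions:

# Disable while the service is in the "failed" state (the scenario
# above): prints "...Yes" and, judging by the '&&' short-circuit in the
# actual results, exits nonzero.
clusvcadm -d PostGres; echo "failed -> disabled: rc=$?"

# Re-enable, then disable from a clean "started" state: prints
# "...Success" and should exit 0.
clusvcadm -e PostGres
clusvcadm -d PostGres; echo "started -> disabled: rc=$?"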
I'd like to fix this in Red Hat Enterprise Linux 6 since it's mostly a nuisance and not causing any service outages.
+1 - Although there are circumstances when I actually get a result similar to "Local machine disabling service:<servicename>...Yes" (notice the "...Yes" instead of "...Success"), after which I cannot restart the resource. I have to run -d twice in a row, getting two differing results, before re-enabling the service:

Local machine disabling service:<servicename>...Yes
Local machine disabling service:<servicename>...Success
So, the fix does the following (a quick way to exercise the new behavior is sketched after the note below):

1) Fail to start (and fail to stop) during enable requests will result in 'Aborted; service failed'
2) Fail to stop during relocation requests will result in 'Aborted; service failed'
3) Fail to stop during 'stop' requests or 'disable' requests will result in 'Failure'
4) Fail to stop during 'disable' requests when the service is in the failed or disabled states will result in 'Warning; see system logs'
5) No amount of disable or stop requests will clear the last-known owner, making this consistent (previously, disable-after-disable would erroneously clear the previous owner)
(1 & 2 should be how rgmanager worked previously; these didn't change)
Upstream commit: http://git.fedorahosted.org/git?p=cluster.git;a=commit;h=2258ab6fb3013e346ab1a675ab81d07e0fa4cb81
*** Bug 727023 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2011-1595.html
*** Bug 857591 has been marked as a duplicate of this bug. ***