Bug 228823 - service permanently at stopping state
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: clumanager
Hardware: All
OS: Linux
Priority: medium
Severity: high
Assigned To: Lon Hohberger
Cluster QE
Reported: 2007-02-15 07:01 EST by Tomasz Jaszowski
Modified: 2009-04-16 16:22 EDT (History)
CC: 2 users

Fixed In Version: RHBA-2007-0149
Doc Type: Bug Fix
Last Closed: 2007-05-10 17:20:30 EDT

Attachments
Allows disable requests from 'stopping' state (556 bytes, patch)
2007-02-16 08:46 EST, Lon Hohberger

External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2007:0149 normal SHIPPED_LIVE rgmanager bug fix update 2007-05-10 17:16:41 EDT

Description Tomasz Jaszowski 2007-02-15 07:01:29 EST
Description of problem:
After tests (stopping services, restarting nodes, etc.), some of the cluster services
are stuck in the 'stopping' state.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
Actual results:
The service is in the 'stopping' state and we can't stop, disable, or start it. We tried
disabling, stopping, and starting the service, and restarting rgmanager - no effect.

Expected results:
We would like to stop/disable this service, or to tell the cluster that those
services are really stopped, so we could do something more, like start the service again.

Additional info:
From the system's point of view the service is stopped - no process, no mounted
resources, etc. The service can be started on only one node - it's restricted
by a failover domain containing only that node, and the recovery method is restart.
Comment 1 Tomasz Jaszowski 2007-02-15 07:09:46 EST
clustat -x | grep s1
    <group name="s1" state="113" state_str="stopping"  owner="t1"
last_owner="t1" restarts="0" last_transition="1171311768"
last_transition_str="Mon Feb 12 21:22:48 2007"/>

and nothing changed since then
Comment 2 Lon Hohberger 2007-02-15 12:53:27 EST
The effect here (unable to stop) should be easy to fix (which I will).  However,
the cause is more interesting to fix.  Both are bugs anyway.

Do you have any specific, reproducible steps to get a service into the 'stopping' state?
Comment 3 Tomasz Jaszowski 2007-02-16 05:40:25 EST
For now, we have no idea how to reproduce this problem (but we will try).

Do you have any quick workaround to tell the cluster that those services are
really stopped? (Rebooting all nodes / stopping all cluster software is not an option.)
Comment 4 Lon Hohberger 2007-02-16 08:46:33 EST
Created attachment 148193 [details]
Allows disable requests from 'stopping' state

This does not fix the cause, but it should allow a user to disable a service
which was stuck in the 'stopping' state.
Comment 5 Tomasz Jaszowski 2007-02-19 06:22:33 EST
OK, and if I would rather not recompile anything right now, is there any way to tell
the cluster that those services are stopped? Maybe some signal to send to rgmanager,
or something to write to /proc/cluster/?
Comment 6 Lon Hohberger 2007-02-19 11:38:54 EST
It's not a kernel (or kernel module) patch; it's a patch against the rgmanager
source RPM.  You can build rgmanager with the patch and do a rolling upgrade
(i.e. upgrade one node at a time, and restart rgmanager).

The first node you should upgrade is 't1', since it's the one that needs to
clear the 'stopping' state.

Alternatively, you can stop all instances of rgmanager (cluster-wide), then
start them all - and that should clear the 'stopping' state.  This is a
sub-optimal solution, of course.
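
A rough sketch of that rolling-upgrade procedure follows. The package version,
architecture, and build paths below are illustrative only - adjust them to match
your actual rgmanager source RPM and build environment:

```shell
# Build host: rebuild the rgmanager source RPM with the patch applied.
rpm -ivh rgmanager-1.9.54-1.src.rpm
# Add the patch to the spec file (Patch0: / %patch0 lines) in
# /usr/src/redhat/SPECS/rgmanager.spec, then rebuild the binary package:
rpmbuild -bb /usr/src/redhat/SPECS/rgmanager.spec

# On each cluster node, one at a time, starting with t1
# (the node that needs to clear the 'stopping' state):
rpm -Uvh rgmanager-1.9.54-1.i386.rpm
service rgmanager restart
```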
Comment 7 Lon Hohberger 2007-02-19 11:41:43 EST
If you want, I can rebuild 1.9.54 with the patch for you, but it's not a
complete solution (i.e. it doesn't fix the _cause_; it lets you fix the
symptom), so I was trying to avoid the intermediate step.
Comment 8 Tomasz Jaszowski 2007-02-19 11:53:51 EST
I had a small maintenance window available, so I decided to restart all instances of
rgmanager. As we expected, it helped.

For now it's working, but I'll try to find more useful logs to trace the problem.

If you can rebuild rgmanager, please do it. I'll use it if the problem occurs again.
Comment 9 Lon Hohberger 2007-02-20 13:02:26 EST
Ok.  So, the fix for the symptom will be available in the next update; you will
be able to just do 'clusvcadm -d <service>' and it will clear the state.

I'll have packages shortly.
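
With the updated packages, clearing a stuck service should then be a short sequence
like the following (service name 's1' is taken from comment 1; this assumes the
clusvcadm disable/enable semantics described above):

```shell
clusvcadm -d s1          # disable: clears the stuck 'stopping' state
clustat -x | grep s1     # verify: state_str should no longer read "stopping"
clusvcadm -e s1          # re-enable (start) the service
```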
Comment 13 Red Hat Bugzilla 2007-05-10 17:20:30 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

