Description of problem:
If a blade is not present (e.g. removed for maintenance), fence_bladecenter cannot check its state: the management module reports the bay as empty. This should be simple to fix for anyone versed in Perl. Normally the fence agent only runs against a blade that is present; if the blade is removed while the cluster is running, you hit this issue.
My case is below. Blade #3 is a good node; blade #2 was removed. Fencing does not work with the blade removed.
system> env -T system:blade
system:blade> power -state
system:blade> env -T system:blade
The target bay is empty.
system:blade> env -T system:blade
Steps to Reproduce:
1. Bring up the cluster on two nodes
2. Physically remove the blade running the service
3. Fence failure is shown in the log

Actual results:
The clustered service does not fail over to the standby node.

Expected results:
The clustered service should fail over. The fence agent should detect that the fenced node is no longer present in the BladeCenter instead of hanging.
Got this from James Parsons on the RHCS mailing list:
I believe this is what you want to happen: if the state cannot be checked, fenced keeps trying. How else could you determine it was safe to stop, without persisting some value like the number of fence attempts and trying to reason out whether stopping was safe? This will not happen if you remove the blade from the cluster before physically removing it. It is a snap to do this with one of the UIs, if you are not prejudiced against UIs :).
Also, removing the node from cluster membership before jerking it out of the rack tells rgmanager to move any services off of it - rather than having to depend on heartbeat failure to make this happen.
That said, if the blade catches fire and a cage IT guy notices and jerks it out quickly (using his IT Oven Mitt, of course), it is silly for fenced to keep incessantly retrying when the thing no longer even exists. Perhaps the correct solution is to have fence_bladecenter report success when the BladeCenter management module reports that no status is available for a particular blade: if the blade is not there, it is obviously safe to say it is fenced :)
If this addresses your situation (I think it does), now would be a REALLY good time to file a ticket requesting this behavior - like today! I'll post a fixed version to the ticket when it is ready.
Thanks to Lon for discussing this with me...;)
Created attachment 161641 [details]
Modified version of fence_bladecenter
I ran into the same problem and modified the fence_bladecenter script so that:
- Turning a blade off reports success if the blade is absent
- Rebooting a blade reports success if the blade is absent
Cluster Suite seems to use only the "reboot" command. The modified script is attached.
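The logic of the change can be sketched as follows. This is a minimal illustration in Python rather than the agent's actual Perl; the empty-bay message mirrors the management-module session shown above, but the function and result names here are hypothetical, not taken from the real script:

```python
# Sketch of the absent-blade handling described in this bug.
# The agent sends "power -state" to the BladeCenter management
# module and inspects the reply; "The target bay is empty." means
# the blade was physically removed, which is treated as a
# successful fence for the "off" and "reboot" actions.

EMPTY_BAY_MSG = "The target bay is empty."

def interpret_power_state(reply: str, action: str) -> str:
    """Map a management-module reply to a fence result.

    reply  -- text returned after sending "power -state"
    action -- requested fence action: "off", "reboot", or "status"
    """
    if EMPTY_BAY_MSG in reply:
        # An absent blade cannot be running, so turning it off or
        # rebooting it is trivially done; a plain status query
        # just reports the bay as absent.
        return "success" if action in ("off", "reboot") else "absent"
    if "On" in reply:
        return "on"
    if "Off" in reply:
        return "off"
    return "unknown"

print(interpret_power_state("The target bay is empty.", "reboot"))  # success
print(interpret_power_state("On", "status"))                        # on
```

Without the empty-bay branch, the agent would loop forever waiting for a power state that can never be read, which is exactly the hang described in the steps to reproduce.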
The patch is now upstream.
~~ Attention Customers and Partners - RHEL 5.5 Beta is now available on RHN ~~
RHEL 5.5 Beta has been released! There should be a fix present in this
release that addresses your request. Please test and report back results
here, by March 3rd 2010 (2010-03-03) or sooner.
Upon successful verification of this request, post your results and update
the Verified field in Bugzilla with the appropriate value.
If you encounter any issues while testing, please describe them and set
this bug into NEED_INFO. If you encounter new defects or have additional
patch(es) to request for inclusion, please clone this bug per each request
and escalate through your support representative.
The previous fix (Cato's attachment from 2007-08-16) worked for me. We are unable to run any beta versions of software.
Yet again, the manpage for the fence agent was not properly updated, so this change is an undocumented feature.
(In reply to comment #7)
> Yet again, the manpage for the fence agent was not properly updated, so this
> change is an undocumented feature.
Jaroslav, can we mark this bug as VERIFIED and file a new bug against the man page issue? I don't want to hold up RHEL5.5 release due to a man page deficiency but do want to track it so that we get it fixed for the next release.
Moving manpages request to bug 573990 and marking this as verified.
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.