Red Hat Bugzilla – Bug 456403
cluster will recover even if a fence device failed
Last modified: 2009-04-16 19:03:23 EDT
Description of problem:
If one has multiple fence devices in a fence level (necessary e.g. if one has
redundant power supplies), one of the fence devices can fail but the cluster
will still recover and reclaim cluster locks. This is obviously bad since e.g.
GFS locks will be reclaimed without the offending node being power cycled.
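For illustration, a fence level with redundant power supplies might look like the following in cluster.conf. This is a made-up sketch, not the attached configuration: the device names, ports, and node name are assumptions. Both devices appear in one method (level), so both agents must succeed for the node to be considered fenced.

```xml
<clusternode name="hat" nodeid="2">
  <fence>
    <!-- One level, two power devices: BOTH must succeed, since each
         agent only cuts one of the two redundant power feeds. -->
    <method name="power">
      <device name="apc1" port="2" option="off"/>
      <device name="apc2" port="2" option="off"/>
      <device name="apc1" port="2" option="on"/>
      <device name="apc2" port="2" option="on"/>
    </method>
  </fence>
</clusternode>
```

Note the ordering: both "off" actions are listed before the "on" actions, so the node is actually powered down at some point even with redundant supplies.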
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a cluster with the attached cluster configuration (never mind the
somewhat unorthodox fencing agents...). Note that the fence device f2 will always
fail.
2. run "service network stop" on one of the nodes
The other node will print in its syslog:
Jul 23 14:41:48 red fenced: hat not a cluster member after 0 sec
Jul 23 14:41:48 red fenced: fencing node "hat"
Jul 23 14:41:48 red fenced: fence "hat" failed
Jul 23 14:41:53 red kernel: GFS2: fsid=juran23:sda.0: jid=1: Trying to acquire journal lock...
Jul 23 14:41:53 red kernel: GFS2: fsid=juran23:sda.0: jid=1: Looking at journal...
Jul 23 14:41:53 red kernel: GFS2: fsid=juran23:sda.0: jid=1: Acquiring the transaction lock...
Jul 23 14:41:53 red kernel: GFS2: fsid=juran23:sda.0: jid=1: Replaying journal...
Jul 23 14:41:53 red kernel: GFS2: fsid=juran23:sda.0: jid=1: Replayed 1 of 1 blocks
Jul 23 14:41:53 red kernel: GFS2: fsid=juran23:sda.0: jid=1: Found 0 revoke tags
Jul 23 14:41:53 red kernel: GFS2: fsid=juran23:sda.0: jid=1: Journal replayed in 1s
Jul 23 14:41:53 red kernel: GFS2: fsid=juran23:sda.0: jid=1: Done
The node "red" has now recovered the cluster and reclaimed GFS2 locks _although
[root@red ~]# cman_tool services
type level name id state
fence 0 default 00010001 none
dlm 1 sda 00040001 none
gfs 2 sda 00030001 none
Expected results:
Fencing is retried and cluster operation stays suspended until fencing succeeds,
which is what happens if one has only a single fencing device in each level.
The same behavior occurs on RHEL5 with cman-2.0.84-2.el5.
Created attachment 312467 [details]
example cluster configuration
This sounds similar to something we fixed a long time ago, will check.
update_cman() is called after the first device succeeds.
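The ordering bug can be sketched as follows. This is an illustrative Python model of fenced's per-level logic, not the actual C code; `fence_level`, `update_cman`, and the device callables are stand-ins. The fix amounts to confirming the victim (update_cman) only after every device in the level has succeeded, rather than after the first one.

```python
def fence_level(devices, update_cman):
    """Run every fence agent in a level, in cluster.conf order.

    The node is confirmed fenced (update_cman) only if ALL devices in
    the level succeed. The reported bug corresponds to calling
    update_cman() as soon as the first device succeeds, which lets
    recovery and lock reclamation proceed even when a later device
    in the same level fails.
    """
    results = [dev() for dev in devices]  # run each agent in order
    if all(results):
        update_cman()                     # correct: confirm only on full success
        return True
    return False                          # any failure -> level failed, must retry


# Illustrative agents: f1 succeeds, f2 always fails (as in the reproducer).
confirmed = []
f1 = lambda: True
f2 = lambda: False

ok = fence_level([f1, f2], lambda: confirmed.append("hat"))
print(ok, confirmed)  # node is NOT confirmed fenced, so fencing retries
```

With the buggy ordering, `confirmed` would already contain "hat" after f1 returned, and the cluster would recover despite f2 failing.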
Created attachment 312490 [details]
Pass 1 fixing order bug. Untested.
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release. Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products. This request is not yet committed for inclusion in an Update release.
commit in RHEL5 branch 9567fe17bf33eb0008831551b76c7f46c55ba40b
I've not tested the fix yet since I don't have either a RHEL5 or STABLE2
cluster readily available. If no one else can do a quick test to verify
the patch, I'll get a cluster set up.
I've tested Lon's patch from #4 on my F-9 cluster (cman-2.03.05-1) and it seems
to solve the issue.
Also, I can confirm that the fencing agents are executed in the order they are
listed in cluster.conf, which is good, since doing power-on followed by
power-off is not quite the same as power-off followed by power-on (-:
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.