Bug 362351 - [RFE] make fence_xvmd not need a cluster for 1-node operation
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: cman
5.1
All Linux
Priority: low
Severity: low
Assigned To: Lon Hohberger
GFS Bugs
Depends On:
Blocks:
Reported: 2007-11-01 14:50 EDT by Lon Hohberger
Modified: 2010-02-19 14:28 EST (History)
2 users

See Also:
Fixed In Version: RHBA-2008-0347
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2008-05-21 11:58:14 EDT
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments
Backported / tested patch from mainline (4.72 KB, patch)
2007-11-01 14:50 EDT, Lon Hohberger

Description Lon Hohberger 2007-11-01 14:50:44 EDT
Description of problem:

When a single host (i.e. a single physical machine) owns multiple guest
virtual machines acting as a cluster, it is a waste of resources to require
running openais/cman just to perform fencing.  Furthermore, the checkpoints
fence_xvmd writes are not replicated anywhere, so writing them serves no
purpose.

The request is to apply this backported patch from mainline/head to the RHEL5
branch for 5.2.

Note: users will have to add 'fence_xvmd -LX' to an init script, such as
rc.local, to enable this mode of operation.
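A minimal sketch of that setup follows; the scratch path is an assumption so the sketch is safe to dry-run, while on a real RHEL 5 host the target would be /etc/rc.d/rc.local:

```shell
# Idempotently enable local-only fence_xvmd at boot: append the invocation
# only if it is not already present.  RC_LOCAL defaults to a scratch file
# here; on a real host it would be /etc/rc.d/rc.local.
RC_LOCAL="${RC_LOCAL:-./rc.local.demo}"
touch "$RC_LOCAL"
grep -qF 'fence_xvmd -LX' "$RC_LOCAL" || \
    printf '%s\n' 'fence_xvmd -LX' >> "$RC_LOCAL"
```

Running the snippet twice leaves only a single `fence_xvmd -LX` line in place.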
Comment 1 Lon Hohberger 2007-11-01 14:50:44 EDT
Created attachment 245951
Backported / tested patch from mainline
Comment 2 RHEL Product and Program Management 2007-11-01 14:54:44 EDT
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update
release.
Comment 5 Lon Hohberger 2008-03-27 15:21:24 EDT
This is fixed as of 2.0.81.

Run:

  fence_xvmd -LX

Comment 7 errata-xmlrpc 2008-05-21 11:58:14 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2008-0347.html
Comment 8 Fedora Update System 2008-07-30 16:03:37 EDT
gfs2-utils-2.03.05-1.fc9, rgmanager-2.03.05-1.fc9, and cman-2.03.05-1.fc9 have been pushed to the Fedora 9 stable repository.  If problems still persist, please make note of it in this bug report.
Comment 9 Ozgur Akan 2010-02-17 18:26:57 EST
When I run fence_xvmd -LX on more than one Dom0 (each running a DomU that is a member of the cluster), fencing doesn't work. If I run fence_xvmd -LX on only one Dom0 (it doesn't matter which), it works well.

I am not sure whether this is a multicast issue, but I need to find a solution. Could you please advise what I can do?
Comment 10 Lon Hohberger 2010-02-18 14:24:35 EST
Local mode is for single-host environments.

If you want to use fence_xvmd in "no-cluster mode" in multi-host environments, you must use different key files - one per host.  Be aware that when using fence_xvmd in this mode, virtual machines must be statically assigned to particular physical hosts; migrations are not tracked.

Suppose we have 4 VMs and 2 physical hosts called "host1" and "host2".

Create a key file on each host:

  dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4096 count=1

A copy of each host's key should then be stored on every virtual machine in /etc/cluster, named for the host it belongs to:

  /etc/cluster/fence_xvm-host1.key
  /etc/cluster/fence_xvm-host2.key
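The key-generation step above can be sketched as follows. The 4 KiB size matches the dd command earlier in this comment; the sketch writes into the current directory rather than /etc/cluster so it can be run anywhere, and the scp line is a hypothetical illustration of distributing the copies:

```shell
# Generate one random key per physical host.  On the real hosts the output
# would be /etc/cluster/fence_xvm.key (the host's own key), and the
# per-host-named copies would be pushed to every guest's /etc/cluster.
for host in host1 host2; do
    dd if=/dev/urandom of="fence_xvm-$host.key" bs=4096 count=1 2>/dev/null
done
# Distribution to the guests would then look something like:
#   scp fence_xvm-host1.key fence_xvm-host2.key vm1:/etc/cluster/
```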

cluster.conf would look something like this:

<cluster>
  <clusternodes>
    <clusternode name="vm1" nodeid="1" votes="1" >
      <fence>
        <method name="1">
          <device name="xvm1" domain="..." />
        </method>
      </fence>
    </clusternode>
    <clusternode name="vm2" nodeid="2" votes="1" >
      <fence>
        <method name="1">
          <device name="xvm2" domain="..." />
        </method>
      </fence>
    </clusternode>
    <clusternode name="vm3" nodeid="3" votes="1" >
      <fence>
        <method name="1">
          <device name="xvm1" domain="..." />
        </method>
      </fence>
    </clusternode>
    <clusternode name="vm4" nodeid="4" votes="1" >
      <fence>
        <method name="1">
          <device name="xvm2" domain="..." />
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="xvm1" agent="/sbin/fence_xvm" key_file="/etc/cluster/fence_xvm-host1.key" />
    <fencedevice name="xvm2" agent="/sbin/fence_xvm" key_file="/etc/cluster/fence_xvm-host2.key" />
  </fencedevices>
</cluster>

Alternatively, consider using the 'fence_virsh' agent, which does not require running fence_xvmd at all.  See the fence_virsh man page for more information.
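For illustration, a hypothetical fencedevice entry using fence_virsh might look like the following; the ipaddr/login/passwd attributes are the agent's standard parameters, while the host name and credentials here are placeholders:

```xml
<fencedevices>
  <!-- Hypothetical sketch: fence_virsh logs in to the physical host over
       SSH and uses virsh to power-cycle the guest, so no fence_xvmd
       daemon or shared key files are needed. -->
  <fencedevice name="virsh1" agent="/sbin/fence_virsh"
               ipaddr="host1" login="root" passwd="password" />
</fencedevices>
```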
Comment 11 Ozgur Akan 2010-02-19 14:28:47 EST
I guess I have to modify the key_file part of cluster.conf manually, as Conga doesn't seem to support it. Not a big deal though.

Thanks for the response.
