Red Hat Bugzilla – Bug 614433
cannot configure ipport for fence agents
Last modified: 2016-04-26 09:28:40 EDT
Description of problem:
The basic menu for configuring SSH options for fence agents is too limited.
It only allows setting the SSH private key and the host name, which may be enough in most cases, but options such as the port to connect to and the user performing the fencing should be available by default.
The SSH configuration is on a per-node basis, which generally works fine, but it should also be available as a device option.
See this config for example:
<clusternode name="rhel6-node1" nodeid="1" votes="1">
  <device name="virsh_fence" port="rhel6-node1"/>
</clusternode>
<clusternode name="rhel6-node2" nodeid="2" votes="1">
  <device name="virsh_fence" port="rhel6-node2"/>
</clusternode>
<fencedevice agent="fence_virsh" identity_file="/root/.ssh/id_rsa" ipaddr="daikengo.int.fabbione.net" ipport="300" login="root" name="virsh_fence" secure="1"/>
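As a hypothetical illustration (element and attribute names are taken from the config quoted above; nothing here is part of the luci fix itself), the missing "ipport" attribute can be added to the fencedevice entry with a few lines of Python, which is also a quick way to check that the resulting XML stays well-formed:

```python
import xml.etree.ElementTree as ET

# Minimal cluster.conf fragment using the names from the report above.
conf = """<cluster>
  <fencedevices>
    <fencedevice agent="fence_virsh" ipaddr="daikengo.int.fabbione.net"
                 login="root" name="virsh_fence" secure="1"/>
  </fencedevices>
</cluster>"""

root = ET.fromstring(conf)
dev = root.find(".//fencedevice[@name='virsh_fence']")
# Set the SSH port the agent should connect to; 300 matches the example config.
dev.set("ipport", "300")
print(ET.tostring(root, encoding="unicode"))
```
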
I only have virsh to test with, so this might not apply to other agents, but I'd prefer to have it documented.
Version-Release number of selected component (if applicable):
luci-0.22.2-7.el6.x86_64 (built from brew)
luci-0.22.2-3 from rhel6 repos
This issue has been proposed when we are only considering blocker
issues in the current Red Hat Enterprise Linux release. It has
been denied for the current Red Hat Enterprise Linux release.
** If you would still like this issue considered for the current
release, ask your support representative to file as a blocker on
your behalf. Otherwise ask that it be considered for the next
Red Hat Enterprise Linux release. **
Are there updated notes on how to configure fencing using "fence_virsh"?
(In reply to comment #6)
> Are there updated notes on how to configure fencing using "fence_virsh"?
fence_virsh is not supported at all in conjunction with RHEL HA/clustering; it was provided only as a standalone fence tool for development (and for usage outside of RHEL HA). Please see the support matrix for fencing in the Red Hat knowledge base at:
If you want to build virtual clusters (clusters running inside of guests), the supported fence agent to use is either fence_xvm/fence_xvmd or fence_virt/fence_virtd (depending on whether you are using RHEL5 or RHEL6).
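For reference, the supported fence_virt/fence_virtd path is driven by a host-side configuration file. The fragment below is only a sketch of a typical /etc/fence_virt.conf (the key file path, multicast address, interface, and libvirt URI are common defaults, not values from this report):

```
fence_virtd {
    listener = "multicast";
    backend = "libvirt";
}
listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        address = "225.0.0.12";
        interface = "virbr0";
        family = "ipv4";
    }
}
backends {
    libvirt {
        uri = "qemu:///system";
    }
}
```

The guests then use fence_xvm (RHEL5) or fence_virt (RHEL6) as the cluster fence agent, with the same shared key file copied into each guest.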
Note that virtualized clusters based on the KVM hypervisor are still in Tech Preview. We are looking to fully support this configuration in the near future.
As far as the inability to set the "ipport" parameter for the respective fence agents is concerned, this should be fixed in commit b226245494b2542c299a7a544cabc7cc508b35fd (0077bc7ba9137dc741a4039334e724e0a711012c).
I also added the ability to set the "udpport" parameter for the respective fence agents.
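For the SNMP-based agents, "udpport" ends up as a plain attribute on the fencedevice element, alongside "ipport" for the SSH/telnet-based ones. A hypothetical example (the host name, community string, and device name are made up for illustration; 161 is the standard SNMP port):

```xml
<fencedevice agent="fence_bladecenter_snmp" ipaddr="bladecenter.example.com"
             udpport="161" community="private" name="blade_fence"/>
```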
The list of fence agents affected by this commit follows:
Note: The ability to set the "ipport" parameter could also be added for fence_virt/fence_xvm, but this was omitted from that fix because of uncertainty about the meaning of "Channel port" (source: man fence_virt) and whether that is the right label to display in the GUI.
(In reply to comment #8)
Note that fence_ibmblade is only a symlink to fence_bladecenter_snmp. ibmblade is deprecated and replaced by bladecenter_snmp.
(In reply to comment #9)
> Note that fence_ibmblade is only a symlink to fence_bladecenter_snmp. ibmblade
> is deprecated and replaced by bladecenter_snmp.
I have already come across this. Only the mentioned variant is used in the code, and IMHO this is for RHEL5/RHEL6 compatibility reasons, so I did not feel the need to do anything about it (especially in connection with this bug). If there should be a strict separation of this fence agent's name per RHEL version, filing a new bug could be considered.
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.