Bug 1039119
| Summary: | ip.sh: monitor_link: add on/off as a valid value | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Jan Pokorný [poki] <jpokorny> |
| Component: | resource-agents | Assignee: | David Vossel <dvossel> |
| Status: | CLOSED ERRATA | QA Contact: | Cluster QE <mspqa-list> |
| Severity: | low | Docs Contact: | |
| Priority: | low | | |
| Version: | 6.5 | CC: | agk, cluster-maint, fdinitto, mnovacek |
| Target Milestone: | rc | Keywords: | EasyFix, Patch |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | resource-agents-3.9.5-8.el6 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-10-14 05:00:07 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1044057 | | |
| Attachments: | | | |
Description by Jan Pokorný [poki], 2013-12-06 17:23:09 UTC
I have verified that it is possible to give the ip.sh resource the
monitor_link="off" parameter with resource-agents-3.9.5-11.el6.x86_64.
----
```
virt-126# cman_tool version
6.2.0 config 4

virt-126# clustat
Cluster Status for STSRHTS25778 @ Wed Jul 30 14:06:42 2014
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 virt-019                        1 Online, rgmanager
 virt-020                        2 Online, rgmanager
 virt-026                        3 Online, Local, rgmanager

 Service Name                 Owner (Last)                 State
 ------- ----                 ----- ------                 -----
 service:le-service           virt-026                     started

virt-126# ccs -h localhost --lsservices
service: name=le-service, autostart=0, recovery=relocate
  ip: ref=10.34.70.161/23
resources:
  ip: monitor_link=off, sleeptime=10, address=10.34.70.161/23

virt-126# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="4" name="STSRHTS25778">
  <totem token="3000"/>
  <fence_daemon post_join_delay="20"/>
  <clusternodes>
    <clusternode name="virt-019" nodeid="1">
      <fence>
        <method name="fence-virt-019">
          <device domain="virt-019.cluster-qe.lab.eng.brq.redhat.com" name="fence-virt-019"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="virt-020" nodeid="2">
      <fence>
        <method name="fence-virt-020">
          <device domain="virt-020.cluster-qe.lab.eng.brq.redhat.com" name="fence-virt-020"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="virt-026" nodeid="3">
      <fence>
        <method name="fence-virt-026">
          <device domain="virt-026.cluster-qe.lab.eng.brq.redhat.com" name="fence-virt-026"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_xvm" auth="sha256" hash="sha256" key_file="/etc/cluster/fence_xvm.key" name="fence-virt-019" timeout="5"/>
    <fencedevice agent="fence_xvm" auth="sha256" hash="sha256" key_file="/etc/cluster/fence_xvm.key" name="fence-virt-020" timeout="5"/>
    <fencedevice agent="fence_xvm" auth="sha256" hash="sha256" key_file="/etc/cluster/fence_xvm.key" name="fence-virt-026" timeout="5"/>
  </fencedevices>
  <rm>
    <resources>
      <ip address="10.34.70.161/23" monitor_link="off" sleeptime="10"/>
    </resources>
    <service autostart="0" name="le-service" recovery="relocate">
      <ip ref="10.34.70.161/23"/>
    </service>
  </rm>
</cluster>
```
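As a usage note, the same resource definition can be created from the command line; the example below assumes the standard --addresource option of the RHEL 6 ccs tool and reuses the address and sleeptime values from the test above.

```sh
# Add the IP resource with link monitoring disabled; "off" is now accepted
# alongside yes/no, true/false and 1/0.
ccs -h localhost --addresource ip address=10.34.70.161/23 monitor_link=off sleeptime=10

# Check what got stored.
ccs -h localhost --lsservices
```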
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2014-1428.html