Bug 1039119 - ip.sh: monitor_link: add on/off as a valid value
Summary: ip.sh: monitor_link: add on/off as a valid value
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: resource-agents
Version: 6.5
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: rc
Target Release: ---
Assignee: David Vossel
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks: 1044057
 
Reported: 2013-12-06 17:23 UTC by Jan Pokorný [poki]
Modified: 2018-12-09 17:21 UTC
CC List: 4 users

Fixed In Version: resource-agents-3.9.5-8.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-14 05:00:07 UTC
Target Upstream Version:
Embargoed:


Attachments
proposed patch (1.14 KB, patch)
2013-12-06 17:23 UTC, Jan Pokorný [poki]
no flags


Links
System ID: Red Hat Product Errata RHBA-2014:1428
Private: 0, Priority: normal, Status: SHIPPED_LIVE
Summary: resource-agents bug fix and enhancement update
Last Updated: 2014-10-14 01:06:18 UTC

Description Jan Pokorný [poki] 2013-12-06 17:23:09 UTC
Created attachment 833689 [details]
proposed patch

Unfortunately, it has never been settled what the "boolean" content type
of a parameter means.  Since other parameters commonly accept the on/off
combination as well as the more established 0/1 and yes/no, one can
suppose that this combination is also valid for the monitor_link
parameter of the ip RA:

* on:
  http://www.redhat.com/archives/linux-cluster/2013-June/msg00003.html
  (while this works, the counterpart "off" apparently does not and would
  mislead the user)

* off: no occurrence found in a quick search


The attached patch straightens this out.

Comment 2 michal novacek 2014-07-30 12:08:25 UTC
I have verified that it is possible to set the ip.sh resource parameter
monitor_link="off" with resource-agents-3.9.5-11.el6.x86_64.

----

virt-026# cman_tool version
6.2.0 config 4

virt-026# clustat
Cluster Status for STSRHTS25778 @ Wed Jul 30 14:06:42 2014
Member Status: Quorate

 Member Name           ID   Status
 ------ ----           ---- ------
 virt-019                  1 Online, rgmanager
 virt-020                  2 Online, rgmanager
 virt-026                  3 Online, Local, rgmanager

 Service Name           Owner (Last)         State         
 ------- ----           ----- ------         -----         
 service:le-service     virt-026             started       

virt-026# ccs -h localhost --lsservices
service: name=le-service, autostart=0, recovery=relocate
  ip: ref=10.34.70.161/23
resources: 
  ip: monitor_link=off, sleeptime=10, address=10.34.70.161/23

virt-026# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="4" name="STSRHTS25778">
  <totem token="3000"/>
  <fence_daemon post_join_delay="20"/>
  <clusternodes>
    <clusternode name="virt-019" nodeid="1">
      <fence>
        <method name="fence-virt-019">
          <device domain="virt-019.cluster-qe.lab.eng.brq.redhat.com" name="fence-virt-019"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="virt-020" nodeid="2">
      <fence>
        <method name="fence-virt-020">
          <device domain="virt-020.cluster-qe.lab.eng.brq.redhat.com" name="fence-virt-020"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="virt-026" nodeid="3">
      <fence>
        <method name="fence-virt-026">
          <device domain="virt-026.cluster-qe.lab.eng.brq.redhat.com" name="fence-virt-026"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_xvm" auth="sha256" hash="sha256" key_file="/etc/cluster/fence_xvm.key" name="fence-virt-019" timeout="5"/>
    <fencedevice agent="fence_xvm" auth="sha256" hash="sha256" key_file="/etc/cluster/fence_xvm.key" name="fence-virt-020" timeout="5"/>
    <fencedevice agent="fence_xvm" auth="sha256" hash="sha256" key_file="/etc/cluster/fence_xvm.key" name="fence-virt-026" timeout="5"/>
  </fencedevices>
  <rm>
    <resources>
      <ip address="10.34.70.161/23" monitor_link="off" sleeptime="10"/>
    </resources>
    <service autostart="0" name="le-service" recovery="relocate">
      <ip ref="10.34.70.161/23"/>
    </service>
  </rm>
</cluster>

Comment 3 errata-xmlrpc 2014-10-14 05:00:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-1428.html

