Bug 1390915 - false-positive monitoring operation result of fence_ipmilan stonith resource
Summary: false-positive monitoring operation result of fence_ipmilan stonith resource
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: fence-agents
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Marek Grac
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1396111 (view as bug list)
Depends On:
Blocks: 1394959 1397888 1397889
 
Reported: 2016-11-02 09:05 UTC by Josef Zimek
Modified: 2020-06-11 13:03 UTC (History)
6 users (show)

Fixed In Version: fence-agents-4.0.11-49.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1397888 1397889 (view as bug list)
Environment:
Last Closed: 2017-08-01 16:10:32 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3667731 0 None None None 2018-10-27 04:12:17 UTC
Red Hat Product Errata RHBA-2017:1874 0 normal SHIPPED_LIVE fence-agents bug fix and enhancement update 2017-08-01 17:53:05 UTC

Description Josef Zimek 2016-11-02 09:05:27 UTC
Description of problem:

When a fence_ipmilan stonith resource is configured with an unreachable IP address, the resource starts and reports successful monitoring operation checks:



# ping -w1 -c5 10.10.10.20
PING 10.10.10.20 (10.10.10.20) 56(84) bytes of data.

--- 10.10.10.20 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 999ms



# pcs stonith create fence_lojza2 fence_ipmilan ipaddr="10.10.10.20" login="login" passwd="password" verbose=true op monitor interval=5 timeout=20


pcs config:

 Resource: fence_lojza2 (class=stonith type=fence_ipmilan)
  Attributes: ipaddr=10.10.10.20 login=login passwd=password verbose=true 
  Operations: monitor interval=5 timeout=20 (fence_lojza2-monitor-interval-5)


pcs status:


 fence_lojza2	(stonith:fence_ipmilan):	Started virt-018



Set debug logging on the stonith-ng component.

Monitoring is successful even though IP 10.10.10.20 is not reachable from the test system:


Oct 28 17:11:31 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: stonith_device_execute:	Operation monitor on fence_lojza2 now running with pid=2833, timeout=20s
Oct 28 17:11:31 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: stonith_action_async_done:	Child process 2833 performing action 'monitor' exited with rc 0
Oct 28 17:11:31 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: st_child_done:	Operation 'monitor' on 'fence_lojza2' completed with rc=0 (0 remaining)
Oct 28 17:11:31 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: log_operation:	Operation 'monitor' [2833] for device 'fence_lojza2' returned: 0 (OK)
Oct 28 17:11:31 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: log_operation:	fence_lojza2:2833 [ NOTICE: List option is not working on this device yet ]
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: stonith_command:	Processing st_execute 14 from lrmd.2536 (               0)
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: schedule_stonith_command:	Scheduling monitor on fence_lojza2 for 91607a7b-8023-4f7c-9b1f-3f4fbbf47fa7 (timeout=20s)
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: stonith_command:	Processed st_execute from lrmd.2536: Operation now in progress (-115)
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: stonith_action_create:	Initiating action monitor for agent fence_ipmilan (target=(null))
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: internal_stonith_action_execute:	forking
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: internal_stonith_action_execute:	sending args
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: stonith_device_execute:	Operation monitor on fence_lojza2 now running with pid=2838, timeout=20s
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: stonith_action_async_done:	Child process 2838 performing action 'monitor' exited with rc 0
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: st_child_done:	Operation 'monitor' on 'fence_lojza2' completed with rc=0 (0 remaining)
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: log_operation:	Operation 'monitor' [2838] for device 'fence_lojza2' returned: 0 (OK)
Oct 28 17:11:36 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: log_operation:	fence_lojza2:2838 [ NOTICE: List option is not working on this device yet ]
Oct 28 17:11:41 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: stonith_command:	Processing st_execute 15 from lrmd.2536 (               0)
Oct 28 17:11:41 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: schedule_stonith_command:	Scheduling monitor on fence_lojza2 for 91607a7b-8023-4f7c-9b1f-3f4fbbf47fa7 (timeout=20s)
Oct 28 17:11:41 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: stonith_command:	Processed st_execute from lrmd.2536: Operation now in progress (-115)
Oct 28 17:11:41 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: stonith_action_create:	Initiating action monitor for agent fence_ipmilan (target=(null))
Oct 28 17:11:41 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: internal_stonith_action_execute:	forking
Oct 28 17:11:41 [2535] virt-018.cluster-qe.lab.eng.brq.redhat.com stonith-ng:    debug: internal_stonith_action_execute:	sending args


Version-Release number of selected component (if applicable):

fence-agents-ipmilan-4.0.11-27.el7_2.7.x86_64


Actual results:

The monitoring operation of the stonith resource succeeds even though the configured IP of the stonith device is not reachable.

Expected results:


If the IP of the stonith device is not reachable, the monitoring operation should report a failed status.
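The expected semantics can be sketched in Python (fence agents are Python scripts). This is a minimal illustration of the intended behavior, not the actual fence-agents code; all names here are hypothetical: the monitor action should return 0 only when the device status is genuinely obtainable, and a non-zero code otherwise.

```python
# Hypothetical sketch of the intended monitor semantics: propagate a
# failure exit code when the device status cannot be obtained, instead
# of silently returning success.

def monitor(get_power_status):
    """Return 0 (OK) only if the plug status is actually obtainable."""
    try:
        status = get_power_status()
    except OSError:
        # Device unreachable (e.g. no route to the IPMI address).
        return 1
    if status not in ("on", "off"):
        # Plug status could not be determined reliably.
        return 1
    return 0


def unreachable():
    # Simulates an IPMI endpoint that cannot be contacted.
    raise OSError("no route to host")


print(monitor(lambda: "on"))   # reachable device: 0
print(monitor(unreachable))    # unreachable device: 1
```

The key point is that any exception or indeterminate status must map to a non-zero exit code, because stonith-ng treats rc=0 as a healthy fence device.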

Comment 5 Marek Grac 2016-11-18 10:02:31 UTC
*** Bug 1396111 has been marked as a duplicate of this bug. ***

Comment 9 Marek Grac 2016-11-23 15:18:34 UTC
Test:

fence_ipmilan -o monitor -l ipmi -p ipmi -a ipmi

Before:
NOTICE: List option is not working on this device yet
$? = 0

After:
Failed: Unable to obtain correct plug status or plug is not available
$? = 1
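The `$?` check above can be scripted when verifying the fix on a cluster node. Since the real agent needs IPMI hardware to talk to, the snippet below uses a hypothetical stand-in function that mimics the post-fix output; against a live system you would call fence_ipmilan itself as in the test above.

```shell
# fence_monitor_fixed is a stand-in for the agent's post-fix behaviour,
# not the real fence_ipmilan: it fails when plug status is unobtainable.
fence_monitor_fixed() {
    echo "Failed: Unable to obtain correct plug status or plug is not available" >&2
    return 1
}

rc=0
fence_monitor_fixed || rc=$?
echo "monitor exit status: $rc"
```

A non-zero exit status here is what lets stonith-ng flag the device as failed instead of reporting a false-positive OK.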

Comment 13 errata-xmlrpc 2017-08-01 16:10:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1874

