Bug 1549366 - stonith monitor/start fails when agent outputs more than 64KB to stderr
Summary: stonith monitor/start fails when agent outputs more than 64KB to stderr
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: rc
Target Release: 7.7
Assignee: Klaus Wenninger
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1748811 (view as bug list)
Depends On:
Blocks:
 
Reported: 2018-02-27 01:20 UTC by Ondrej Faměra
Modified: 2021-06-10 14:52 UTC
CC List: 8 users

Fixed In Version: pacemaker-1.1.20-5.el7
Doc Type: No Doc Update
Doc Text:
This shouldn't affect enough users to document; it's more helpful to GSS when debugging.
Clone Of:
Environment:
Last Closed: 2019-08-06 12:53:38 UTC
Target Upstream Version:
Embargoed:


Attachments
reproducer fencing agent (66.24 KB, application/x-shellscript)
2018-02-27 01:20 UTC, Ondrej Faměra


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3364741 0 None None None 2018-02-27 08:27:24 UTC
Red Hat Knowledge Base (Solution) 3610131 0 Configure None The following error is shown when a stonith device is created: `Error: Agent 'fence_ipmilan'; is not installed or does n... 2019-07-30 13:59:50 UTC
Red Hat Product Errata RHBA-2019:2129 0 None None None 2019-08-06 12:54:08 UTC

Description Ondrej Faměra 2018-02-27 01:20:30 UTC
Created attachment 1401102 [details]
reproducer fencing agent

=== Description of problem
A stonith device fails to start/monitor when the output from the fencing agent
on stderr has more than 64KB of data (on a single line or across multiple lines).
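
For illustration only (a hedged sketch, not the attached reproducer; the stdin "action=..." handling and the metadata XML are simplified assumptions), an agent of this kind boils down to something like:

  #!/bin/sh
  # Sketch of a fence_fake2-style agent: read the requested action from
  # stdin (stonith agents receive "name=value" lines such as "action=monitor"),
  # answer the metadata query minimally, and flood stderr for anything else.
  action=""
  while read -r line; do
      case "$line" in
          action=*) action=${line#action=} ;;
      esac
  done
  if [ "$action" = "metadata" ]; then
      printf '<?xml version="1.0"?>\n<resource-agent name="fence_fake2" shortdesc="fake agent"/>\n'
      exit 0
  fi
  # Write 64KB + 1 byte of 'x' to stderr -- just past the default pipe capacity.
  head -c $((64 * 1024 + 1)) /dev/zero | tr '\0' 'x' >&2
  exit 0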

=== Version-Release number of selected component (if applicable):
pacemaker-1.1.18-8.el7.x86_64 (7.5 Beta) and pacemaker-1.1.16-12.el7_4.7.x86_64 (RHEL 7.4)

=== How reproducible:
always

=== Steps to Reproduce:
1. Configure the fencing agent from the attachment of this BZ in the pacemaker cluster
  # mv fence_fake /usr/sbin/fence_fake2
  # pcs stonith create test fence_fake2
2. Observe whether the fencing agent starts (see the example check after these steps)
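
For example, the result can be checked with commands like these (generic checks, not part of the attachment; the exact log file location may vary by setup):
  # pcs status
  # grep stonith-ng /var/log/messages | tail -n 20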

=== Actual results:
The 'start' operation hangs and we receive a 'Timer expired' message in the logs.
(Logs trimmed; note that the logs don't capture the whole output from stderr, only the start of each line.)

Feb 27 01:53:27 [1630] fastvm-rhel-7-5b-93 stonith-ng:     info: st_child_term: Child 1753 timed out, sending SIGTERM
Feb 27 01:53:27 [1630] fastvm-rhel-7-5b-93 stonith-ng:   notice: stonith_action_async_done:     Child process 1753 performing action 'monitor' timed out with signal 15
Feb 27 01:53:27 [1630] fastvm-rhel-7-5b-93 stonith-ng:  warning: log_action:    fence_fake2[1753] stderr: [ oiiiuuuu... ]
Feb 27 01:53:27 [1630] fastvm-rhel-7-5b-93 stonith-ng:  warning: log_action:    fence_fake2[1753] stderr: [ tttttttt... ]
Feb 27 01:53:27 [1630] fastvm-rhel-7-5b-93 stonith-ng:  warning: log_action:    fence_fake2[1753] stderr: [ tttttttt... ]
Feb 27 01:53:27 [1630] fastvm-rhel-7-5b-93 stonith-ng:  warning: log_action:    fence_fake2[1753] stderr: [ 34455566...
Feb 27 01:53:27 [1630] fastvm-rhel-7-5b-93 stonith-ng:   notice: log_operation: Operation 'monitor' [1753] for device 'test' returned: -62 (Timer expired)
Feb 27 01:53:27 [1631] fastvm-rhel-7-5b-93       lrmd:     info: log_finished:  finished - rsc:test action:start call_id:13  exit-code:1 exec-time:20005ms queue-time:0ms

=== Expected results:
The stonith device starts up and monitors properly, showing a cut-off version of stderr in the logs.

=== Additional info:
This was originally encountered with the 'fence_cisco_ucs' fence agent, which can print the data it receives when the attribute 'verbose=1' is specified. In some environments the output exceeded the 64KB limit, and instead of capturing the desired data, the stonith device failed to start.

fence_cisco_ucs also has an option to log the output to a debug_file, but this option only works when verbose=1 is enabled, which makes it impossible to continuously capture the data once it exceeds the 64KB limit. This could potentially be improved within fence_cisco_ucs to allow capture without writing to stderr.
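
For context, the kind of configuration that triggered this in the field would look roughly like the following (device name, address, credentials, and log path are placeholders, not taken from an actual environment):

  # pcs stonith create ucs-fence fence_cisco_ucs ipaddr=ucs.example.com \
        login=admin passwd=secret verbose=1 debug_file=/var/log/fence_ucs_debug.log

With verbose=1 the agent echoes the data it receives to stderr, which is what pushed the output past 64KB here.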

The minimal fence agent 'fence_fake2' attached to this BZ reproduces this behaviour. Removing one character from the output in fence_fake2 results in a successful start/monitor operation of the fencing device, which suggests some kind of limit is being reached.

The behaviour observed here is undesirable because when it happens in a real deployment we don't know whether we have hit the stderr limit or some other timeout in the fencing agent. Ideally the output would either not be limited or the limit would be configurable, but it would also help to get some feedback from stonithd that this was the situation and not something else.

Comment 2 Ken Gaillot 2018-02-27 16:53:23 UTC
When Pacemaker needs to use a resource agent or fence agent, it executes the agent in a new process with a pipe to communicate with it. Pipes can buffer a maximum of 64KB of written data before writes block or fail.

With resource agents, Pacemaker reads output from the pipe in chunks as it is available. This avoids hitting that limit.

For whatever historical reason, Pacemaker does not do that with fence agents; it waits until the agent completes before trying to read all of its output at once. This causes the issue. This will have to be rewritten to read in chunks.
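
As a rough illustration of that limit (a shell sketch, not Pacemaker code; 64KiB is the default pipe capacity on current Linux kernels):

  # The writer blocks on the 65537th byte until the reader starts draining,
  # mirroring what happens when stonithd only reads after the agent exits.
  ( head -c $((64 * 1024 + 1)) /dev/zero; echo 'writer unblocked' >&2 ) \
      | { sleep 5; cat > /dev/null; }

The 'writer unblocked' message only appears once the delayed reader begins consuming the pipe; with no reader at all, the writer (here, the fence agent) would hang until the operation timeout fires.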

Comment 5 Klaus Wenninger 2018-12-20 14:56:13 UTC
The idea is to unify agent-calling between fenced (stonithd) and execd (lrmd), where this issue has already been taken care of.
This simplifies the code base and thus improves maintainability.

Comment 6 Klaus Wenninger 2019-03-25 18:11:05 UTC
Fixed with https://github.com/ClusterLabs/pacemaker/pull/1728

Comment 8 Patrik Hagara 2019-07-01 13:09:36 UTC
before (1.1.19-8.el7):
======================

> [root@virt-037 ~]# wget -O fence_fake2 https://bugzilla.redhat.com/attachment.cgi?id=1401102
> [...]
> [root@virt-037 ~]# chmod +x fence_fake2
> [root@virt-037 ~]# for node in virt-038 virt-040 virt-041; do scp fence_fake2 $node:/usr/sbin/; ssh $node restorecon /usr/sbin/fence_fake2; done
> [...]
> [root@virt-037 ~]# ssh virt-038 rpm -q pacemaker
> pacemaker-1.1.19-8.el7.x86_64
> [root@virt-037 ~]# ssh virt-038 pcs stonith create test fence_fake2
> [root@virt-037 ~]# ssh virt-038 pcs status
> Cluster name: STSRHTS1140
> Stack: corosync
> Current DC: virt-041 (version 1.1.19-8.el7-c3c624ea3d) - partition with quorum
> Last updated: Mon Jul  1 14:59:20 2019
> Last change: Mon Jul  1 14:56:52 2019 by root via cibadmin on virt-038
> 
> 3 nodes configured
> 4 resources configured
> 
> Online: [ virt-038 virt-040 virt-041 ]
> 
> Full list of resources:
> 
>  fence-virt-038	(stonith:fence_xvm):	Started virt-038
>  fence-virt-040	(stonith:fence_xvm):	Started virt-040
>  fence-virt-041	(stonith:fence_xvm):	Started virt-041
>  test	(stonith:fence_fake2):	Stopped
> 
> Failed Actions:
> * test_start_0 on virt-038 'unknown error' (1): call=20, status=Timed Out, exitreason='',
>     last-rc-change='Mon Jul  1 14:56:54 2019', queued=0ms, exec=20038ms
> * test_start_0 on virt-040 'unknown error' (1): call=20, status=Timed Out, exitreason='',
>     last-rc-change='Mon Jul  1 14:57:14 2019', queued=0ms, exec=20018ms
> * test_start_0 on virt-041 'unknown error' (1): call=20, status=Timed Out, exitreason='',
>     last-rc-change='Mon Jul  1 14:57:34 2019', queued=0ms, exec=20019ms
> 
> 
> Daemon Status:
>   corosync: active/enabled
>   pacemaker: active/enabled
>   pcsd: active/enabled

result: start operation of the provided fence agent times out


after (1.1.20-5.el7):
=====================

> [root@virt-042 ~]# wget -O fence_fake2 https://bugzilla.redhat.com/attachment.cgi?id=1401102
> [...]
> [root@virt-042 ~]# chmod +x fence_fake2
> [root@virt-042 ~]# for node in virt-043 virt-044 virt-045; do scp fence_fake2 $node:/usr/sbin/; ssh $node restorecon /usr/sbin/fence_fake2; done
> [...]
> [root@virt-042 ~]# ssh virt-043 rpm -q pacemaker
> pacemaker-1.1.20-5.el7.x86_64
> [root@virt-042 ~]# ssh virt-043 pcs stonith create test fence_fake2
> [root@virt-042 ~]# ssh virt-043 pcs status                         
> Cluster name: STSRHTS20620
> Stack: corosync
> Current DC: virt-043 (version 1.1.20-5.el7-3c4c782f70) - partition with quorum
> Last updated: Mon Jul  1 15:06:08 2019
> Last change: Mon Jul  1 15:05:57 2019 by root via cibadmin on virt-043
> 
> 3 nodes configured
> 4 resources configured
> 
> Online: [ virt-043 virt-044 virt-045 ]
> 
> Full list of resources:
> 
>  fence-virt-043	(stonith:fence_xvm):	Started virt-043
>  fence-virt-044	(stonith:fence_xvm):	Started virt-044
>  fence-virt-045	(stonith:fence_xvm):	Started virt-045
>  test	(stonith:fence_fake2):	Started virt-043
> 
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/enabled

result: provided fence agent starts successfully


marking as verified in 1.1.20-5.el7

Comment 10 errata-xmlrpc 2019-08-06 12:53:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2129

Comment 11 Michele Baldessari 2019-10-12 07:10:00 UTC
*** Bug 1748811 has been marked as a duplicate of this bug. ***

