Bug 1501948 - [3.5] default fluentd elasticsearch plugin request timeout too short by default, leads to potential log loss and stalled log flow
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.5.0
Hardware: All
OS: All
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 3.5.z
Assignee: Rich Megginson
QA Contact: Anping Li
URL:
Whiteboard:
Depends On: 1497836
Blocks:
 
Reported: 2017-10-13 14:36 UTC by Ruben Romero Montes
Modified: 2020-12-14 10:30 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: If the logging system is under heavy load, Elasticsearch may take longer than the 5 second timeout to respond, or it may respond with an error indicating that Fluentd needs to back off.
Consequence: In the former case, Fluentd retries sending the records, which can lead to duplicate records. In the latter case, if Fluentd is unable to retry, it drops records, leading to data loss.
Fix: For the former case, request_timeout is set to 10 minutes, so Fluentd waits up to 10 minutes for the reply from Elasticsearch before retrying the request. For the latter case, Fluentd blocks attempting to read more input until the output queues and buffers have enough room to write more data.
Result: Greatly reduced chances of duplicate data (but not entirely eliminated), and no data loss due to backpressure.
Clone Of: 1497836
Environment:
Last Closed: 2017-12-07 07:13:19 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift origin-aggregated-logging pull 707 (last updated 2017-10-16 13:51:54 UTC)
Red Hat Knowledge Base (Solution) 3214991 (last updated 2017-10-13 14:36:28 UTC)
Red Hat Product Errata RHSA-2017:3389, SHIPPED_LIVE, Moderate: Red Hat OpenShift Enterprise security, bug fix, and enhancement update (last updated 2017-12-07 12:09:10 UTC)

Description Ruben Romero Montes 2017-10-13 14:36:29 UTC
+++ This bug was initially created as a clone of Bug #1497836 +++

At times, logging from a particular namespace seems to have been stopped from the Kibana view of logs.

This can occur when a namespace's pods running on one or more nodes in the cluster are not being indexed into Elasticsearch by that node's fluentd process.

The fluentd process can get into this state when every attempt to write logs to an Elasticsearch instance takes longer than 5 seconds to complete. Logging essentially stops working until writes consistently complete in under 5 seconds again.

This can lead to log loss, when containers come and go and fluentd fills up its internal queue and is then unable to read logs from the new containers.

The fix requires changing the "request_timeout" parameter of the elasticsearch output plugin to a very high value (600 seconds, or 10 minutes, should be sufficient for most purposes), and setting "buffer_queue_full_action" to "block" to prevent further records from being read from log files and subsequently dropped on the floor.

Further, using a "flush_interval" of 1 second ensures writes flow to Elasticsearch with minimal delay, keeping the writes from each node in the cluster smaller and allowing for more overlap between log file reading and Elasticsearch writes.
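
For illustration, a minimal sketch of the relevant settings, assuming a stock fluent-plugin-elasticsearch <match> stanza; the match pattern, host, and port below are placeholders, not the values actually shipped by the logging deployment:

  <match **>
    @type elasticsearch
    host logging-es
    port 9200
    # wait up to 10 minutes for Elasticsearch to reply before retrying
    request_timeout 600s
    # flush small batches frequently to keep individual writes small
    flush_interval 1s
    # block reading further input rather than dropping records when the queue fills
    buffer_queue_full_action block
  </match>

With buffer_queue_full_action set to block, backpressure propagates back to the input side: fluentd stops reading from the log files instead of discarding records, so records are not lost, at the cost of reads temporarily falling behind.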

--- Additional comment from Peter Portante on 2017-10-03 19:34:36 EDT ---

See also PR https://github.com/openshift/origin-aggregated-logging/pull/698 for a proposed set of changes upstream.

--- Additional comment from Jeff Cantrill on 2017-10-06 14:20:35 EDT ---

Closing in favor of the referenced trello card

--- Additional comment from Peter Portante on 2017-10-09 10:07:40 EDT ---

Why close this bug when we need a BZ to file this against 3.6.z, no?

Comment 2 Anping Li 2017-10-30 06:53:02 UTC
v3.5: The fix is in logging-fluentd:3.5.0-38. Fluentd succeeded in sending about 440M logs in 3 hours; no 'Connection opened to Elasticsearch cluster' messages or exceptions were reported. The logs can be retrieved in Kibana.
#openshift version
openshift v3.5.5.31.36
kubernetes v1.5.2+43a9be4
etcd 3.1.0

Images:
logging-elasticsearch:3.5.0-46
logging-kibana:3.5.0-43
logging-fluentd:3.5.0-38
logging-curator:3.5.5.31.36

Comment 5 errata-xmlrpc 2017-12-07 07:13:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3389

