Bug 1501948

Summary: [3.5] default fluentd elasticsearch plugin request timeout too short by default, leads to potential log loss and stalled log flow
Product: OpenShift Container Platform
Reporter: Ruben Romero Montes <rromerom>
Component: Logging
Assignee: Rich Megginson <rmeggins>
Status: CLOSED ERRATA
QA Contact: Anping Li <anli>
Severity: urgent
Docs Contact:
Priority: unspecified
Version: 3.5.0
CC: anli, aos-bugs, jcantril, pportant, rmeggins, rromerom, tkatarki, xtian
Target Milestone: ---
Target Release: 3.5.z
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: If the logging system is under heavy load, Elasticsearch may take longer than the 5 second timeout to respond, or it may respond with an error indicating that Fluentd needs to back off.
Consequence: In the former case, Fluentd retries sending the records, which can lead to duplicate records. In the latter case, if Fluentd is unable to retry, it drops records, leading to data loss.
Fix: For the former case, set request_timeout to 10 minutes, so that Fluentd waits up to 10 minutes for the reply from Elasticsearch before retrying the request. For the latter case, Fluentd now blocks instead of reading more input until the output queues and buffers have enough room to write more data.
Result: Greatly reduced chance of duplicate data (but not entirely eliminated). No data loss due to backpressure.
Story Points: ---
Clone Of: 1497836
Environment:
Last Closed: 2017-12-07 07:13:19 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1497836
Bug Blocks:

Description Ruben Romero Montes 2017-10-13 14:36:29 UTC
+++ This bug was initially created as a clone of Bug #1497836 +++

At times, logging from a particular namespace appears to have stopped when viewed in Kibana.

This can occur when the logs of a namespace's pods, running on one or more nodes in the cluster, are not being indexed into Elasticsearch by those nodes' fluentd processes.

The fluentd process can get into this state when every attempt to write logs to an Elasticsearch instance takes longer than 5 seconds to complete. Until writes consistently complete in under 5 seconds, logging essentially stops working.

This can lead to log loss when containers come and go: fluentd fills up its internal queue and is then unable to read logs from the new containers.

The fix requires two changes to the elasticsearch output plugin configuration: set the "request_timeout" parameter to some very high value (600 seconds, or 10 minutes, should be sufficient for most purposes), and set "buffer_queue_full_action" to "block" to prevent further records from being read from log files and subsequently dropped on the floor.

Further, a "flush_interval" of 1 second ensures writes flow to Elasticsearch with minimal delay, keeping the writes from each node in the cluster smaller and allowing more overlap between log file reading and Elasticsearch writes.
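As a minimal sketch, the relevant portion of the fluentd elasticsearch output configuration could look like the following once these settings are applied (the host and port values are illustrative placeholders, not values taken from this bug):

<match **>
  @type elasticsearch
  # Host and port are illustrative placeholders; the real values come
  # from the logging deployment.
  host logging-es
  port 9200
  # Wait up to 10 minutes for Elasticsearch to reply before retrying,
  # instead of the 5 second default that causes spurious retries under load.
  request_timeout 600s
  # When the output buffer queue is full, block reading further input
  # rather than dropping new records on the floor.
  buffer_queue_full_action block
  # Flush buffered records every second so writes stay small and frequent.
  flush_interval 1s
</match>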

--- Additional comment from Peter Portante on 2017-10-03 19:34:36 EDT ---

See also PR https://github.com/openshift/origin-aggregated-logging/pull/698 for a proposed set of changes upstream.

--- Additional comment from Jeff Cantrill on 2017-10-06 14:20:35 EDT ---

Closing in favor of the referenced trello card

--- Additional comment from Peter Portante on 2017-10-09 10:07:40 EDT ---

Why close this bug when we need a BZ to file this against 3.6.z, no?

Comment 2 Anping Li 2017-10-30 06:53:02 UTC
v3.5: The fix is in logging-fluentd:3.5.0-38. Fluentd succeeded in sending about 440M of logs in 3 hours; no 'Connection opened to Elasticsearch cluster' messages or exceptions were reported. The logs can be retrieved in Kibana.
#openshift version
openshift v3.5.5.31.36
kubernetes v1.5.2+43a9be4
etcd 3.1.0

Images:
logging-elasticsearch:3.5.0-46
logging-kibana:3.5.0-43
logging-fluentd:3.5.0-38
logging-curator:3.5.5.31.36

Comment 5 errata-xmlrpc 2017-12-07 07:13:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3389