Bug 1501948
| Summary: | [3.5] default fluentd elasticsearch plugin request timeout too short by default, leads to potential log loss and stalled log flow | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Ruben Romero Montes <rromerom> |
| Component: | Logging | Assignee: | Rich Megginson <rmeggins> |
| Status: | CLOSED ERRATA | QA Contact: | Anping Li <anli> |
| Severity: | urgent | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.5.0 | CC: | anli, aos-bugs, jcantril, pportant, rmeggins, rromerom, tkatarki, xtian |
| Target Milestone: | --- | | |
| Target Release: | 3.5.z | | |
| Hardware: | All | | |
| OS: | All | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: |
Cause: If the logging system is under heavy load, Elasticsearch may take longer than the 5-second timeout to respond, or it may respond with an error indicating that Fluentd needs to back off.
Consequence: In the former case, Fluentd retries sending the records, which can lead to duplicate records. In the latter case, if Fluentd is unable to retry, it drops records, leading to data loss.
Fix: For the former case, the request_timeout is raised to 10 minutes, so that Fluentd waits up to 10 minutes for the reply from Elasticsearch before retrying the request. For the latter case, Fluentd blocks reading more input until the output queues and buffers have enough room to write more data.
Result: Greatly reduced chance of duplicate data (though not entirely eliminated). No data loss due to backpressure.
|
| Story Points: | --- | | |
| Clone Of: | 1497836 | Environment: | |
| Last Closed: | 2017-12-07 07:13:19 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
| Bug Depends On: | 1497836 | ||
| Bug Blocks: | |||
|
Description
Ruben Romero Montes
2017-10-13 14:36:29 UTC
v3.5: The fix is in logging-fluentd:3.5.0-38. Fluentd succeeded in sending about 440M logs in 3 hours; no 'Connection opened to Elasticsearch cluster' messages or exceptions were reported, and the logs can be retrieved in Kibana.

# openshift version
openshift v3.5.5.31.36
kubernetes v1.5.2+43a9be4
etcd 3.1.0

Images:
logging-elasticsearch:3.5.0-46
logging-kibana:3.5.0-43
logging-fluentd:3.5.0-38
logging-curator:3.5.5.31.36

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3389
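
The Doc Text fix corresponds to settings like the following in the Fluentd Elasticsearch output configuration. This is a sketch only: the host, port, and match pattern are placeholders, and the exact values shipped in the logging-fluentd image may differ.

```conf
# Sketch of the relevant fluent-plugin-elasticsearch output settings;
# host/port here are placeholders, not the shipped defaults.
<match **>
  @type elasticsearch
  host logging-es
  port 9200
  # Wait up to 10 minutes for Elasticsearch to reply before retrying,
  # instead of the former 5-second timeout.
  request_timeout 600s
  # When the output buffer queue is full, block reading more input
  # rather than dropping records, so backpressure causes no data loss.
  buffer_queue_full_action block
</match>
```

With `buffer_queue_full_action block`, a slow Elasticsearch stalls log flow upstream instead of discarding records, which trades latency for the "no data loss due to backpressure" result described above.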