Description of problem:
When Fluentd receives an unexpected error from Elasticsearch (as opposed to ElasticsearchOutOfMemory or BulkIndexQueueFull, which are "expected"), Fluentd will log the entire response. This is unhelpful. All errors should be handled like ElasticsearchOutOfMemory and BulkIndexQueueFull errors. The user can always turn on debug logging to get more detailed information about errors if the problem is persistent.

Version-Release number of selected component (if applicable):
fluent-plugin-elasticsearch-1.13.2 (and .3)

How reproducible:
When the bulk index request times out or returns some error other than ElasticsearchOutOfMemory or BulkIndexQueueFull.
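A minimal sketch of the behavior being asked for, not the plugin's actual code: every Elasticsearch error, expected or not, produces one compact log line and is re-raised so the chunk is retried, and the raw response is only visible at debug level. The `handle_bulk_error` helper and `BulkIndexError` class are hypothetical names for illustration.

```ruby
# Hypothetical sketch -- not fluent-plugin-elasticsearch's real code.
require 'json'
require 'logger'

class BulkIndexError < StandardError; end

def handle_bulk_error(reason, response, log)
  # Compact message whether or not the reason is one of the "expected"
  # cases (out of memory, bulk queue full).
  log.warn("Elasticsearch bulk request failed: #{reason}; will retry")
  # Full payload only for users who have turned on debug logging.
  log.debug("Elasticsearch response: #{response.to_json}")
  # Re-raise so the buffer chunk is retried instead of dumping the response.
  raise BulkIndexError, reason
end

log = Logger.new($stdout)
log.level = Logger::INFO # switch to Logger::DEBUG to see the full response
begin
  handle_bulk_error('request_timeout', { 'errors' => true, 'items' => [] }, log)
rescue BulkIndexError => e
  log.info("caught #{e.message}; chunk will be retried")
end
```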
We saw 23,000+ character log lines in the fluentd pods that contained the full dump of the error response from Elasticsearch when there was a timeout creating an index. If the fluentd plugin only emitted the JSON blobs from the response payload that actually contain an error, that would be helpful; we agree the full payload is pretty useless. If the fluentd plugin could also recognize an index creation timeout and handle it just like it handles bulk request rejection errors, that would help as well.
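A sketch of the "only emit the JSON blobs that actually contain an error" suggestion, assuming the standard Elasticsearch bulk API response shape (an `items` array where each entry wraps its operation result under a key such as "index" or "create" and carries an `error` object on failure). The `failed_items` helper name is made up for illustration and is not the plugin's API.

```ruby
# Sketch: log only the failed entries of a bulk response instead of the
# whole multi-kilobyte payload.
require 'json'

def failed_items(bulk_response)
  (bulk_response['items'] || []).select do |item|
    # Each item has a single op key ("index", "create", ...); keep the
    # item only if that result carries an "error" object.
    result = item.values.first
    result.is_a?(Hash) && result.key?('error')
  end
end

response = {
  'errors' => true,
  'items'  => [
    { 'index' => { '_index' => 'logstash-2018.05.14', 'status' => 201 } },
    { 'index' => { '_index' => 'logstash-2018.05.14', 'status' => 429,
                   'error'  => { 'type' => 'es_rejected_execution_exception',
                                 'reason' => 'bulk queue is full' } } }
  ]
}

# Emit just the entries that failed, not the full response dump.
puts failed_items(response).to_json
```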
Fixed with the merge of https://github.com/uken/fluent-plugin-elasticsearch/pull/399 and the release of v1.15.0.
Fluentd works well with the fix in logging:v3.9.27. No regression errors were found, so moving the bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:1566