Bug 1565214 - omelasticsearch needs better handling for bulk index rejections and other errors
Summary: omelasticsearch needs better handling for bulk index rejections and other errors
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: rsyslog
Version: 7.6
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Assignee: Rich Megginson
QA Contact: Anping Li
Depends On:
Reported: 2018-04-09 16:08 UTC by Rich Megginson
Modified: 2018-10-30 10:18 UTC
CC: 11 users

Fixed In Version: rsyslog-8.24.0-24.el7
Doc Type: No Doc Update
Doc Text:
Clone Of:
Last Closed: 2018-10-30 10:17:13 UTC
Target Upstream Version:

Attachments

System ID Private Priority Status Summary Last Updated
Github rsyslog rsyslog-doc pull 675 0 None closed add writeoperation, retryfailures, retryruleset - and retry configuration 2020-01-27 13:22:18 UTC
Github rsyslog rsyslog pull 2733 0 None closed omelasticsearch: write op types; bulk rejection retries 2020-01-27 13:22:19 UTC
Red Hat Product Errata RHEA-2018:3135 0 None None None 2018-10-30 10:18:20 UTC

Description Rich Megginson 2018-04-09 16:08:06 UTC
Description of problem:
The current upstream fluent-plugin-elasticsearch handles bulk index rejections, as well as some other errors, by retrying them.  We might be able to do better in rsyslog:

- identify errors that can be retried, such as bulk index rejections, sleep, then resubmit those records to the rsyslog queue
- identify errors that cannot be retried (e.g. schema violations) and write those records to some sort of "dead letter queue" - a local file (could also be the omelasticsearch errorfile)
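The split described above can be sketched in Python (the plugin itself is C). The status sets here are assumptions: 429 (es_rejected_execution) and 503 are typically transient, while 400 (e.g. a mapping/schema violation) is permanent.

```python
# Sketch: split an Elasticsearch _bulk response into retryable and
# permanent per-item errors. Status classification is an assumption.
RETRYABLE_STATUSES = frozenset({429, 503})

def classify_bulk_items(bulk_response):
    """Return (retryable, permanent) lists of failed item results."""
    retryable, permanent = [], []
    for item in bulk_response.get("items", []):
        # each item looks like {"index": {...}} or {"create": {...}}
        result = next(iter(item.values()))
        status = result.get("status", 200)
        if status < 300:
            continue  # item succeeded
        (retryable if status in RETRYABLE_STATUSES else permanent).append(result)
    return retryable, permanent
```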

We need some way to associate the errors with the records, so we will also need to implement support for a unique record identifier.

- rsyslog assigns a unique id to each record
- omelasticsearch plugin uses this as the _id field
- omelasticsearch plugin assembles a bulk index request from many records and submits
- omelasticsearch processes the response from elasticsearch
- if the error is transient, find the original record via the id in the error response and resubmit it
- if the error is permanent, find the original record via the id in the error response and write it to the error file

The trick is mapping the id in the error response back to the original record.  We might need some sort of hash table in omelasticsearch: the key is the _id, and the value is the original message.
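The id-to-record mapping can be sketched in Python (again, the real plugin is C). The `uuid` field name and the retryable-status set are assumptions, standing in for whatever unique id rsyslog assigns:

```python
import json

RETRYABLE = frozenset({429, 503})  # assumed transient HTTP statuses

def build_bulk_request(records, index="logs"):
    """Assemble an ND-JSON _bulk body plus an _id -> record lookup table.
    Assumes each record carries a rsyslog-assigned unique id in rec["uuid"]."""
    pending = {}   # the hash table: _id -> original record
    lines = []
    for rec in records:
        _id = rec["uuid"]
        pending[_id] = rec
        lines.append(json.dumps({"index": {"_index": index, "_id": _id}}))
        lines.append(json.dumps(rec["msg"]))
    return "\n".join(lines) + "\n", pending

def dispatch_errors(bulk_response, pending):
    """Map per-item errors back to originals via _id; split retry vs. dead-letter."""
    to_retry, dead_letter = [], []
    for item in bulk_response.get("items", []):
        result = next(iter(item.values()))  # {"index": {...}} or {"create": {...}}
        status = result.get("status", 200)
        _id = result.get("_id")
        if status < 300:
            pending.pop(_id, None)  # success: drop the record from the table
            continue
        rec = pending.get(_id)
        if rec is not None:
            (to_retry if status in RETRYABLE else dead_letter).append(rec)
    return to_retry, dead_letter
```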

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:

This is a must-have feature for common logging and openshift logging.

Comment 13 Rich Megginson 2018-07-24 22:05:50 UTC
@schituku - https://github.com/openshift/origin-aggregated-logging/pull/1259
This configures rsyslog to retry failed elasticsearch operations, and write any "hard" errors to 
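The retry configuration that landed upstream (the rsyslog-doc pull request linked above adds writeoperation, retryfailures, and retryruleset) can be sketched roughly like this; the server, error-file path, and ruleset name are illustrative:

```
ruleset(name="try_es") {
    action(type="omelasticsearch"
           server="localhost"
           bulkmode="on"
           writeoperation="create"     # create (not index) so retries are idempotent
           retryfailures="on"          # resubmit retryable bulk errors
           retryruleset="try_es"       # failed records re-enter this ruleset
           errorfile="/var/log/rsyslog/es-errors.log")  # permanent ("hard") errors
}
```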

Comment 15 Anping Li 2018-08-27 05:22:35 UTC
Verified and passed with rsyslog-elasticsearch-8.24.0-33.el7.x86_64 and atomic-openshift-3.11.0-0.19.0.  The rejected logs are sent to Elasticsearch again.

Comment 18 errata-xmlrpc 2018-10-30 10:17:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

