+++ This bug was initially created as a clone of Bug #1565909 +++
Description of problem:
Now that we've switched to adding a unique id to each record, a retry updates the existing document with that id. So we don't get duplicates, but we do get a lot of deletions: overwriting a document in Elasticsearch deletes the old version and indexes a new one, and a document deletion is an expensive operation.
We must change https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/configs.d/openshift/output-es-config.conf and https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/configs.d/openshift/output-es-ops-config.conf to set the `write_operation` parameter to `create` instead of the default `index`: https://github.com/uken/fluent-plugin-elasticsearch/blob/v0.12/lib/fluent/plugin/out_elasticsearch.rb#L42
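A minimal sketch of the change, assuming the elasticsearch output block in those files looks roughly like the following (the match pattern and host/port settings are placeholders; only the `write_operation` line is the addition):

```
<match **>
  @type elasticsearch
  host "#{ENV['ES_HOST']}"
  port "#{ENV['ES_PORT']}"
  # reject retried records with an existing id instead of overwriting them
  write_operation create
</match>
```

With `create`, a retried record whose id is already indexed fails with a version conflict rather than rewriting (and thus deleting) the existing document.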
We must also have a CI test that verifies
- no duplicates
- no deletions
when the es plugin has to retry.
Version-Release number of selected component (if applicable):
3.10 and backports
Steps to Reproduce:
--- Additional comment from Peter Portante on 2018-04-10 23:43:00 EDT ---
See also https://www.elastic.co/guide/en/elasticsearch/reference/2.0/docs-bulk.html#docs-bulk, which describes the behavior in the fourth paragraph:
The possible actions are index, create, delete and update.
"index" and "create" expect a source on the next line, and
have the same semantics as the op_type parameter to the
standard index API (i.e. "create" will fail if a document
with the same index and type exists already, whereas index
will add or replace a document as necessary).
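The semantics quoted above can be sketched with a small in-memory model. This is not the Elasticsearch client API; `BulkStore`, its methods, and the `deletions` counter are hypothetical stand-ins that illustrate why `create` avoids the deletion cost that `index` incurs on retry:

```python
# In-memory sketch of the "index" vs "create" bulk op semantics.
# The dict stands in for an Elasticsearch index; nothing here talks
# to a real cluster.

class BulkStore:
    def __init__(self):
        self.docs = {}
        self.deletions = 0  # overwriting a doc is a delete + re-index

    def index(self, doc_id, source):
        # "index" adds or replaces; replacing an existing doc
        # first deletes the old version
        if doc_id in self.docs:
            self.deletions += 1
        self.docs[doc_id] = source
        return "ok"

    def create(self, doc_id, source):
        # "create" fails if the id already exists, so a retried
        # record is rejected instead of rewriting the document
        if doc_id in self.docs:
            return "version_conflict"
        self.docs[doc_id] = source
        return "ok"

# A fluentd retry re-sends the same record (same unique id):
with_index = BulkStore()
with_index.index("rec-1", {"msg": "hello"})
with_index.index("rec-1", {"msg": "hello"})  # retry overwrites
print(with_index.deletions)                  # 1

with_create = BulkStore()
with_create.create("rec-1", {"msg": "hello"})
result = with_create.create("rec-1", {"msg": "hello"})  # retry rejected
print(result, with_create.deletions)         # version_conflict 0
```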
--- Additional comment from Jeff Cantrill on 2018-04-11 12:33:10 EDT ---
The fix is in openshift3/logging-fluentd/images/v3.7.46-1. No regression errors were found, so the bug is moved to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.