Description of problem:
When `buffer_chunk_size` is configured in fluentd, the following warning is written to the log file:

2020-11-20 09:26:28 +0000 [warn]: chunk bytes limit exceeds for an emitted event stream: 1471470bytes

This is caused by the `read_lines_limit` setting in the fluentd `in_tail` plugin. To avoid the warning it is advised to set `read_lines_limit` to a smaller value, as can be found here. However, this variable cannot be set via the openshift-logging operator settings. For customers who need to set `buffer_chunk_size` to match their external backend, being able to configure `read_lines_limit` as well would avoid unnecessary warnings. Furthermore, customers are experiencing data loss because of this.

Version-Release number of selected component (if applicable):
4.6

Steps to Reproduce:
1. Install OCP logging
2. Set buffer_chunk_size to 1M

Actual results:
Warning messages occur and log data is lost.

Expected results:
No data loss; no warning messages.

Additional info:
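For context, a minimal fluentd configuration fragment showing where the two settings live; the paths and values here are illustrative, not the operator's actual generated config:

```
# Hypothetical fluentd fragment (not the operator-generated config).
<source>
  @type tail
  path /var/log/containers/*.log
  # Lowering read_lines_limit keeps emitted event streams under the
  # buffer chunk limit, avoiding "chunk bytes limit exceeds" warnings.
  read_lines_limit 50
</source>

<match **>
  @type forward
  <buffer>
    # Matches the external backend's expected chunk size.
    chunk_limit_size 1M
  </buffer>
</match>
```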
Moving the severity to High given the belief that there is data loss.
Verified on clusterlogging.4.6.0-202204072326. readLinesLimit can be specified on the ClusterLogging resource:

apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  collection:
    logs:
      fluentd: {}
      type: fluentd
  forwarder:
    fluentd:
      buffer:
        chunkLimitSize: 1M
      inFile:
        readLinesLimit: 50
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.6.57 security and extras update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:1622