Bug 1422008 was previously closed without resolution. A customer reports that the issue persists. Requesting a new RFE.
Description of problem:
Long lines read by fluentd from the Docker logs are split into several documents when sent to Elasticsearch.
The maximum message size appears to be 16KB, so an 85KB message ends up as 6 separate messages in different chunks.
Fluentd is running with its default configuration (docker json-file log driver).
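For context, the json-file driver writes each chunk as its own JSON entry in the container log file, and only the final chunk's "log" value is terminated with "\n". An abbreviated, illustrative example of the raw log for one long line:

  {"log":"AAA...A","stream":"stdout","time":"..."}     <- 16384 bytes, no trailing \n
  {"log":"AAA...A","stream":"stdout","time":"..."}     <- further 16KB chunks
  {"log":"AAA...A\n","stream":"stdout","time":"..."}   <- final chunk, ends with \n

fluentd's in_tail reads each entry as a separate record, which is why Elasticsearch receives separate documents.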
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. oc debug dc/cakephp
2. generate a file with the attached content in a single line (one way to do this is sketched after these steps)
3. cat longlog.txt
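One way to generate such a file (illustrative; any single-line file larger than 16KB should reproduce the problem):

  python -c "print('A' * 85000)" > longlog.txt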
Actual results:
The message is split into 6 messages, visible in Kibana.

Expected results:
A single message should have been generated.
Additional info:
* Indexing the same document into Elasticsearch manually does not split it.
* oc logs does not show anything relevant.
* The fluentd logs do not show anything relevant.
* docker logs shows the entire message.
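To confirm where the split happens, the raw json-file log on the node can be inspected (the path and container id below are placeholders):

  sudo jq '.log | length' /var/lib/docker/containers/<container-id>/<container-id>-json.log

Each partial chunk should appear as its own JSON entry with a "log" value of 16384 characters, with only the final chunk's "log" value ending in "\n".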
Previous RFE 1422008 was closed without resolution: https://bugzilla.redhat.com/show_bug.cgi?id=1422008
The original issue with docker was that it ran OOM when logging because there was no upper limit on the size of a log entry (https://github.com/moby/moby/issues/18057), so a hard-coded limit of 16KB was introduced.
There were various proposals to make the size configurable (https://github.com/moby/moby/issues/34855 and https://github.com/moby/moby/issues/32923#issuecomment-299334898), but they were rejected by docker/moby upstream.
We might be able to use https://github.com/fluent-plugins-nursery/fluent-plugin-concat to join the split records back into a single record.
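A sketch of what that could look like, based on the plugin's documented newline-detection mode (the docker.** tag pattern and filter placement are assumptions and would need to match our actual fluentd pipeline):

  <filter docker.**>
    @type concat
    key log                     # field the json-file driver writes the line into
    use_first_timestamp true    # keep the timestamp of the first chunk
    multiline_end_regexp /\n$/  # a chunk ending in "\n" terminates the record
    separator ""                # join chunks without inserting anything between them
  </filter>

This works because only the final chunk of a split line ends with "\n"; the 16KB partial chunks do not. A flush_interval would likely also be needed so that a record whose terminating chunk never arrives is not buffered indefinitely.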
The docker/moby team also suggested that we write our own plugin that would allow a much higher limit.
Fixed for CRI-O use in 3.11 via https://bugzilla.redhat.com/show_bug.cgi?id=1552304. Closing CURRENTRELEASE with no intention to resolve this specifically for docker.