Bug 1852341
| Summary: | 2020-06-03 08:34:06 +0000 [warn]: got unrecoverable error in primary and no secondary error_class=ArgumentError error="'Metadata' is not a designated severity" | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Oscar Casal Sanchez <ocasalsa> |
| Component: | Logging | Assignee: | Brett Jones <brejones> |
| Status: | CLOSED ERRATA | QA Contact: | Giriyamma <gkarager> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 4.4 | CC: | alchan, anli, aos-bugs, brejones, cruhm, gparente, jcantril, mas-hatada, mfuruta, naygupta, periklis, qitang, rh-container |
| Target Milestone: | --- | | |
| Target Release: | 4.6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | logging-exploration | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1881996 (view as bug list) | Environment: | |
| Last Closed: | 2020-10-29 13:35:26 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1881996 | | |
| Attachments: | | | |
Comment 1
Brett Jones
2020-06-30 17:01:08 UTC
Hello Brett, when you are saying "could you inspect and/or upload the bad chunk backups?", what are you referencing? Inspecting the file "/tmp/fluent/backup/worker0/object_3fdad2555694/5a84eb39a0aeb6b60cfbd17b0ddedd77.log", for example? Regards, Oscar

Yes, I believe those files contain the log messages which are failing to parse. They would be helpful for debugging.

Adding the UpcomingSprint label, pending reporter info to enable debugging the bad chunks.

Hello, sorry for the delay. We are still investigating the issue and trying to figure out why the log messages have strange sequences. The logs you uploaded seem to have random UTF sequences when they should be well-formatted JSON. Whether or not that is the primary issue is unclear so far.

Hello Brett, thanks, let me know if you need anything more to analyze the issue.

I believe we have identified two different issues, but I need to clarify something from your configuration first. Do you have a syslog configuration for Log Forwarding? Could you please provide your full log forwarding configuration? If you do not have log forwarding configured for syslog output, then we have a bug in configuration, because this error comes from a syslog output, which should not be in your config. The error "'Metadata' is not a designated severity" is a problem with the syslog output mapping a severity level of Metadata to one of the pre-defined syslog severities.

Hello Brett,
About your questions:
- The LogForwarding configuration is the same one I described when I opened the case:
~~~
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1alpha1
  kind: LogForwarding
  metadata:
    creationTimestamp: "2020-07-20T15:24:32Z"
    generation: 1
    name: instance
    namespace: openshift-logging
    resourceVersion: "283540"
    selfLink: /apis/logging.openshift.io/v1alpha1/namespaces/openshift-logging/logforwardings/instance
    uid: a8a2ee11-da23-480c-8f77-f8e92b2f3313
  spec:
    outputs:
    - endpoint: elasticsearch.openshift-logging.svc:9200
      name: elasticsearch
      secret:
        name: fluentd
      type: elasticsearch
    pipelines:
    - inputSource: logs.app
      name: container-logs
      outputRefs:
      - elasticsearch
    - inputSource: logs.infra
      name: infra-logs
      outputRefs:
      - elasticsearch
    - inputSource: logs.audit
      name: audit-logs
      outputRefs:
      - elasticsearch
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
~~~
- The ClusterLogging configuration is:
~~~
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1
  kind: ClusterLogging
  metadata:
    annotations:
      clusterlogging.openshift.io/logforwardingtechpreview: enabled
    creationTimestamp: "2020-07-20T15:24:01Z"
    generation: 1
    name: instance
    namespace: openshift-logging
    resourceVersion: "562635"
    selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
    uid: 5a2f681d-cea7-4939-a924-f0ac41178da8
  spec:
    collection:
      logs:
        fluentd: {}
        type: fluentd
    curation:
      curator:
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        schedule: 30 3 * * *
      type: curator
    logStore:
      elasticsearch:
        nodeCount: 3
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        redundancyPolicy: SingleRedundancy
        resources:
          limits:
            cpu: null
            memory: 16Gi
          requests:
            cpu: 1
            memory: 16Gi
        storage:
          size: 500Gi
          storageClassName: cinder
      type: elasticsearch
    managementState: Managed
    visualization:
      kibana:
        nodeSelector:
          node-role.kubernetes.io/infra: ""
        replicas: 2
      type: kibana
~~~
Regarding your questions, it's important to know that the cluster was first configured to send the logs outside without using LogForwarding: syslog was configured following the "Forwarding logs using the syslog protocol" section of the documentation [1], with a syslog configmap like this:
~~~
apiVersion: v1
data:
  syslog.conf: |
    <store>
      @type syslog_buffered
      remote_syslog syslog.example.com
      port 514
      hostname ${hostname}
      remove_tag_prefix tag
      tag_key ident,systemd.u.SYSLOG_IDENTIFIER
      facility local0
      severity info
      use_record true
      payload_key message
    </store>
kind: ConfigMap
~~~
The LogForwarding Tech Preview was then activated to send only the audit logs outside, and that is when the messages below showed up:
~~~
2020-06-17 21:53:17 +0000 [warn]: got unrecoverable error in primary and no secondary error_class=ArgumentError error="'Metadata' is not a designated severity"
2020-06-17 21:53:17 +0000 [warn]: suppressed same stacktrace
2020-06-17 21:53:17 +0000 [warn]: bad chunk is moved to /tmp/fluent/backup/worker0/object_3fdad2555694/5a84eac703b35c5ca86fc1f80b3cbaa0.log
2020-06-17 21:54:17 +0000 [warn]: got unrecoverable error in primary and no secondary error_class=ArgumentError error="'Metadata' is not a designated severity"
2020-06-17 21:54:17 +0000 [warn]: suppressed same stacktrace
2020-06-17 21:54:17 +0000 [warn]: bad chunk is moved to /tmp/fluent/backup/worker0/object_3fdad2555694/5a84eb00687b4638e88c49c21c01e48d.log
2020-06-17 21:55:17 +0000 [warn]: got unrecoverable error in primary and no secondary error_class=ArgumentError error="'Metadata' is not a designated severity"
2020-06-17 21:55:17 +0000 [warn]: suppressed same stacktrace
2020-06-17 21:55:17 +0000 [warn]: bad chunk is moved to /tmp/fluent/backup/worker0/object_3fdad2555694/5a84eb39a0aeb6b60cfbd17b0ddedd77.log
~~~
[1] https://docs.openshift.com/container-platform/4.4/logging/config/cluster-logging-external.html#cluster-logging-collector-syslog_cluster-logging-external
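For context, the warning above comes from the syslog output's severity validation: with `use_record true`, the severity is taken from the record's `level` field (as Brett notes later in this thread), and Kubernetes audit events carry audit-policy levels such as `Metadata` there, which is not a valid syslog severity. A minimal sketch of that kind of check, in plain Ruby rather than the actual plugin code (the severity list and method names are assumptions):
~~~
VALID_SEVERITIES = %w[emerg alert crit err warning notice info debug].freeze

# Hypothetical severity check, similar in spirit to the one that raises
# "'Metadata' is not a designated severity" in the fluentd warnings above.
def validate_severity!(value)
  unless VALID_SEVERITIES.include?(value.to_s.downcase)
    raise ArgumentError, "'#{value}' is not a designated severity"
  end
  value
end

validate_severity!("info")      # accepted: a standard syslog severity
validate_severity!("Metadata")  # raises ArgumentError, like the audit records here
~~~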
Your log forwarding configuration is set to only forward logs to Elasticsearch, but you have also created a syslog configmap. Am I understanding this correctly? Are you intending to have all logs sent to Elasticsearch and a copy of all logs sent to syslog as well? I would also like to point out that these are not errors but warnings about these specific messages not being able to be sent to syslog; the Elasticsearch output should not be affected. Can you confirm that you are seeing those audit logs in Kibana/Elasticsearch?

Hello Brett, you are correct: all the logs (infra, apps and audit) are sent to the internal Elasticsearch using Log Forwarding, and in addition syslog is configured to send the logs outside. Log Forwarding does not offer the option to send logs outside using syslog until OCP 4.5; it is not available in previous versions. I'll confirm whether the messages are visible in Elasticsearch. But even if they are visible in Elasticsearch, if not all the logs are sent outside using syslog, it should be fixed. Let me confirm what you are asking. Thanks in advance, Oscar

Hello Brett, the logs are sent to the internal Elasticsearch and at the same time they are sent to an external syslog using the configMap. Before OCP 4.5, syslog was not supported as an endpoint in LogForwarding, and even in OCP 4.5 it is not documented how to do it (I opened BZ#1856703 a few days ago about this lack of documentation on using syslog with LogForwarding). At this moment, the audit logs are not visible in Kibana. Let me know the next steps; I'll try to reproduce it too. Regards, Oscar

Hello Oscar, thanks for your patience in reviewing this. I would like to point out that there is an upcoming fix [1] that will disable this functionality you are using. The syslog output [2] alongside Log Forwarding is not an intended feature, as the two are meant to be exclusive. You mentioned before that this issue only arose once you enabled Log Forwarding TP. The intended use of Log Forwarding is to replace the syslog configmap instead of running alongside it.
[1] https://github.com/openshift/cluster-logging-operator/pull/604
[2] https://docs.openshift.com/container-platform/4.4/logging/config/cluster-logging-external.html#cluster-logging-collector-syslog_cluster-logging-external

Hello Brett, thank you so much for your feedback. From your words, then, I understand that it is a bug and that the PR [1] you mention is related to BZ#1858200, which I opened in the past :-). But BZ#1858200 was not opened in relation to LogForwarding, and LogForwarding is not involved in reproducing it, so I don't really see how the PR [1] and BZ#1858200 are related; I'll ask that question on the other bug. Let me verify whether disabling LogForwarding makes the error disappear from the logs before closing this bugzilla, and thank you so much for your time.
[1] https://github.com/openshift/cluster-logging-operator/pull/604
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1858200

Yes, I think it's not precisely related to the other BZ#1858200, but my point is that Log Forwarding and the syslog configmap should not both be enabled. I only suggested this because you originally said "LogForwarding TP was activated to send only the audit logs outside and in that moment is when the messages below showed up", and I only mean that since the two aren't intended to coexist this may not be a relevant issue. I think my questions are now:
1. Does the syslog configmap produce this error without enabling Log Forwarding?
2. Does enabling Log Forwarding and disabling the syslog configmap produce this warning?
3. After BZ#1858200 has been fixed, can this warning be reproduced?

Hello Brett, we'll check the next steps and I'll update you if we are able to reproduce them. Thanks in advance. Oscar

Hello Oscar, were you able to get any more information?

Hello Brett, sure, I was testing with the customer: the syslog configmap was disabled and the issues disappeared. So, as you were supposing, it is the PR done to fix BZ#1858200, even though the Bugzilla description is very different. Thank you so much for your help. Regards, Oscar

*** This bug has been marked as a duplicate of bug 1858200 ***

We still hit this issue when verifying bug 1858200. We think this issue depends on Log Forwarding while 1858200 didn't, so we closed bug 1858200 and reopened [1] for further investigation.

I see, the `use_record true` setting is trying to use the "level" from the k8s audit log, which is Metadata [1]. I believe we need to relabel the k8s audit logs' "level" field to something like "k8s_audit_level" so that fluentd doesn't try to use it as the severity and we can retain the original information.
[1]: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/#audit-policy

Hello Brett, what you are describing makes sense, and this reported bug is independent from bug 1858200. Thank you so much for going further in your investigations. Regards, Oscar

*** Bug 1867053 has been marked as a duplicate of this bug. ***

Created attachment 1712809 [details]
fluentd.conf
Fluentd reports the error below.
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: emit transaction failed: error_class=ArgumentError error="'Metadata' is not a designated severity" location="/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/syslog_protocol-0.9.2/lib/syslog_protocol/packet.rb:100:in `severity='" tag="openshift-audit.log"
2020-08-27 11:22:48 +0000 [warn]: suppressed same stacktrace
2020-08-27 11:22:48 +0000 [warn]: emit transaction failed: error_class=ArgumentError error="'Metadata' is not a designated severity" location="/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/syslog_protocol-0.9.2/lib/syslog_protocol/packet.rb:100:in `severity='" tag="openshift-audit.log"
2020-08-27 11:22:48 +0000 [warn]: suppressed same stacktrace
2020-08-27 11:22:48 +0000 [warn]: emit transaction failed: error_class=ArgumentError error="'Metadata' is not a designated severity" location="/opt/rh/rh-ruby25/root/usr/local/share/gems/gems/syslog_protocol-0.9.2/lib/syslog_protocol/packet.rb:100:in `severity='" tag="openshift-audit.log"
2020-08-27 11:22:48 +0000 [warn]: suppressed same stacktrace
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
2020-08-27 11:22:48 +0000 [warn]: unknown placeholder `${record['level']}` found
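A minimal sketch of the relabeling Brett suggests above: move the audit record's `level` field (Metadata, Request, RequestResponse) into a dedicated field so it is preserved but no longer mistaken for a syslog severity. This is plain Ruby over a record hash; the target field name `k8s_audit_level` is only his proposal in this thread, not necessarily the shipped fix:
~~~
# Hypothetical record transform for the suggested relabeling.
def relabel_audit_level(record)
  record["k8s_audit_level"] = record.delete("level") if record.key?("level")
  record
end

record = { "kind" => "Event", "level" => "Metadata" }
relabel_audit_level(record)
# => { "kind" => "Event", "k8s_audit_level" => "Metadata" }
~~~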
I hit a similar issue when testing forwarding logs to syslog using ClusterLogForwarder:
$ oc logs -n openshift-logging fluentd-5ggx8
Setting each total_size_limit for 1 buffers to 19242572390 bytes
Setting queued_chunks_limit_size for each buffer to 2293
Setting chunk_limit_size for each buffer to 8388608
2020-09-02 08:06:33 +0000 [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `strip' for nil:NilClass" location="/etc/fluent/plugin/filter_parse_json_field.rb:83:in `block in do_replace_json_log'" tag="openshift-audit.log"
2020-09-02 08:06:33 +0000 [warn]: /etc/fluent/plugin/filter_parse_json_field.rb:83:in `block in do_replace_json_log'
2020-09-02 08:06:33 +0000 [warn]: /etc/fluent/plugin/filter_parse_json_field.rb:81:in `each'
2020-09-02 08:06:33 +0000 [warn]: /etc/fluent/plugin/filter_parse_json_field.rb:81:in `do_replace_json_log'
2020-09-02 08:06:33 +0000 [warn]: /etc/fluent/plugin/filter_parse_json_field.rb:52:in `block in filter_stream'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event.rb:197:in `block in each'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event.rb:196:in `each'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event.rb:196:in `each'
2020-09-02 08:06:33 +0000 [warn]: /etc/fluent/plugin/filter_parse_json_field.rb:51:in `filter_stream'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:178:in `block in filter_stream'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:178:in `each'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:178:in `reduce'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:178:in `filter_stream'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:159:in `emit_events'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:97:in `emit_stream'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_relabel.rb:29:in `process'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/output.rb:789:in `emit_sync'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_copy.rb:58:in `block in process'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_copy.rb:56:in `each'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_copy.rb:56:in `each_with_index'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_copy.rb:56:in `process'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/multi_output.rb:148:in `emit_sync'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:160:in `emit_events'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:97:in `emit_stream'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_relabel.rb:29:in `process'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/output.rb:789:in `emit_sync'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_copy.rb:58:in `block in process'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_copy.rb:56:in `each'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_copy.rb:56:in `each_with_index'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_copy.rb:56:in `process'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/multi_output.rb:148:in `emit_sync'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:97:in `emit_stream'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/out_relabel.rb:29:in `process'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/output.rb:789:in `emit_sync'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:160:in `emit_events'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/event_router.rb:97:in `emit_stream'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:410:in `receive_lines'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:528:in `wrap_receive_lines'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:762:in `block in handle_notify'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:806:in `with_io'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:742:in `handle_notify'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:738:in `block in on_notify'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:738:in `synchronize'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:738:in `on_notify'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:558:in `on_notify'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:532:in `attach'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:281:in `setup_watcher'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:312:in `block in start_watchers'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:298:in `each'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:298:in `start_watchers'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:275:in `refresh_watchers'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/plugin/in_tail.rb:203:in `start'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/root_agent.rb:203:in `block in start'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/root_agent.rb:192:in `block (2 levels) in lifecycle'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/root_agent.rb:191:in `each'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/root_agent.rb:191:in `block in lifecycle'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/root_agent.rb:178:in `each'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/root_agent.rb:178:in `lifecycle'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/root_agent.rb:202:in `start'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/engine.rb:274:in `start'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/engine.rb:219:in `run'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/supervisor.rb:816:in `run_engine'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/supervisor.rb:559:in `block in run_worker'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/supervisor.rb:741:in `main_process'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/supervisor.rb:554:in `run_worker'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/lib/fluent/command/fluentd.rb:330:in `<top (required)>'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/share/rubygems/rubygems/core_ext/kernel_require.rb:59:in `require'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/share/rubygems/rubygems/core_ext/kernel_require.rb:59:in `require'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.7.4/bin/fluentd:8:in `<top (required)>'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/bin/fluentd:23:in `load'
2020-09-02 08:06:33 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/bin/fluentd:23:in `<main>'
2020-09-02 08:06:33 +0000 [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `strip' for nil:NilClass" location="/etc/fluent/plugin/filter_parse_json_field.rb:83:in `block in do_replace_json_log'" tag="openshift-audit.log"
2020-09-02 08:06:33 +0000 [warn]: suppressed same stacktrace
2020-09-02 08:06:33 +0000 [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `strip' for nil:NilClass" location="/etc/fluent/plugin/filter_parse_json_field.rb:83:in `block in do_replace_json_log'" tag="openshift-audit.log"
2020-09-02 08:06:33 +0000 [warn]: suppressed same stacktrace
2020-09-02 08:06:33 +0000 [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `strip' for nil:NilClass" location="/etc/fluent/plugin/filter_parse_json_field.rb:83:in `block in do_replace_json_log'" tag="openshift-audit.log"
2020-09-02 08:06:33 +0000 [warn]: suppressed same stacktrace
2020-09-02 08:06:33 +0000 [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `strip' for nil:NilClass" location="/etc/fluent/plugin/filter_parse_json_field.rb:83:in `block in do_replace_json_log'" tag="openshift-audit.log"
2020-09-02 08:06:33 +0000 [warn]: suppressed same stacktrace
2020-09-02 08:06:33 +0000 [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `strip' for nil:NilClass" location="/etc/fluent/plugin/filter_parse_json_field.rb:83:in `block in do_replace_json_log'" tag="openshift-audit.log"
2020-09-02 08:06:33 +0000 [warn]: suppressed same stacktrace
2020-09-02 08:06:33 +0000 [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `strip' for nil:NilClass" location="/etc/fluent/plugin/filter_parse_json_field.rb:83:in `block in do_replace_json_log'" tag="openshift-audit.log"
2020-09-02 08:06:33 +0000 [warn]: suppressed same stacktrace
2020-09-02 08:06:33 +0000 [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `strip' for nil:NilClass" location="/etc/fluent/plugin/filter_parse_json_field.rb:83:in `block in do_replace_json_log'" tag="openshift-audit.log"
2020-09-02 08:06:33 +0000 [warn]: suppressed same stacktrace
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
2020-09-02 08:06:33 +0000 [warn]: unknown placeholder `${level}` found
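The `NoMethodError: undefined method 'strip' for nil:NilClass` in the trace above is raised when filter_parse_json_field.rb calls `strip` on a record field that audit events do not carry. A hypothetical illustration of the failure mode and the obvious guard; the field name and code below are assumptions, not the actual plugin source:
~~~
# Hypothetical: audit records may lack the field this filter expects,
# so the value is nil and nil.strip raises NoMethodError.
def extract_message(record, field = "MESSAGE")
  record[field].strip            # crashes when the field is missing
end

def extract_message_guarded(record, field = "MESSAGE")
  value = record[field]
  value.nil? ? nil : value.strip # guard: skip records without the field
end

extract_message_guarded({ "kind" => "Event" })  # => nil instead of an exception
~~~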
CLF:
~~~
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: rsyslog-created-by-user
    type: syslog
    syslog:
      facility: local0
      rfc: RFC3164
      severity: informational
    url: 'tcp://rsyslogserver.syslog.svc:514'
  pipelines:
  - name: forward-audit-to-external-syslog
    inputRefs:
    - audit
    labels:
      logforwarder: audit-log
    outputRefs:
    - rsyslog-created-by-user
  - name: forward-infra-to-external-syslog
    inputRefs:
    - infrastructure
    labels:
      logforwarder: infra-log
    outputRefs:
    - rsyslog-created-by-user
  - name: forward-app-to-external-syslog
    inputRefs:
    - application
    labels:
      logforwarder: app-log
    outputRefs:
    - rsyslog-created-by-user
~~~
CLO: ose-cluster-logging-operator-v4.6.0-202009011832.p0
With the above error logs in fluentd, the app and infra logs could be forwarded, but the audit logs could not. Moving severity to high, as the "unknown placeholder `${level}` found" message floods the logs whenever audit logs are enabled.
Verified this bug on cluster 4.6.0-0.nightly-2020-09-20-022022.
The below error/warn messages no longer appear while forwarding logs to syslog using ClusterLogForwarder, and the audit logs could be shipped to syslog successfully.
- [warn]: emit transaction failed: error_class=NoMethodError error="undefined method `strip' for nil:NilClass" location="/etc/fluent/plugin/filter_parse_json_field.rb:83:in `block in do_replace_json_log'" tag="openshift-audit.log"
- [warn]: unknown placeholder `${record['level']}` found
- [warn]: got unrecoverable error in primary and no secondary error_class=ArgumentError error="'Metadata' is not a designated severity"
- 2020-06-17 21:53:17 +0000 [warn]: suppressed same stacktrace
CLF used:
~~~
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: rsyslog-created-by-user
    type: syslog
    syslog:
      facility: local0
      rfc: RFC3164
      severity: informational
    url: 'tcp://rsyslogserver.openshift-logging.svc:514'
  pipelines:
  - name: forward-audit-to-external-syslog
    inputRefs:
    - audit
    labels:
      logforwarder: audit-log
    outputRefs:
    - rsyslog-created-by-user
  - name: forward-infra-to-external-syslog
    inputRefs:
    - infrastructure
    labels:
      logforwarder: infra-log
    outputRefs:
    - rsyslog-created-by-user
  - name: forward-app-to-external-syslog
    inputRefs:
    - application
    labels:
      logforwarder: app-log
    outputRefs:
    - rsyslog-created-by-user
~~~
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6.1 extras update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4198

This issue has already been resolved in 4.6 via ERRATA. If you need to report a similar or the same issue, please create a new issue for tracking.

(In reply to naygupta from comment #42)
> Do we have any idea in which future release this issue is planned to fix?
Please read the bug details, as it plainly identifies that this issue blocks https://bugzilla.redhat.com/show_bug.cgi?id=1881996, which is linked to 4.5.
> Do we have any workaround for this issue meanwhile?
Ref https://bugzilla.redhat.com/show_bug.cgi?id=1881996, which also states it has been closed via errata.

(In reply to Jeff Cantrill from comment #43)
> This issue has already been resolved in 4.6 via ERRATA. If you need to
> report a similar or the same issue, please create a new issue for tracking.
We already have Bug 1892511 for it. Please handle it in Bug 1892511. By the way, Bug 1892511 is very critical even though its target was set to OCP 4.7. We hope that Red Hat will fix it in releases older than OCP 4.7 as well.