Bug 1515715 - [RFE] No record generator information in rsyslog server
Summary: [RFE] No record generator information in rsyslog server
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.9.z
Assignee: Noriko Hosoi
QA Contact: Anping Li
URL:
Whiteboard:
Duplicates: 1543761
Depends On:
Blocks:
 
Reported: 2017-11-21 09:27 UTC by Anping Li
Modified: 2021-09-09 12:50 UTC
CC List: 8 users

Fixed In Version: logging-fluentd-docker-v3.9.22-2
Doc Type: Enhancement
Doc Text:
To send the necessary information to the remote syslog server, the following changes were made. If the directive use_record is set to true (false by default) in /etc/fluent/configs.d/dynamic/output-remote-syslog.conf:

1) The hostname in the fluentd record is forwarded to the remote syslog server. Otherwise, the hostname of the fluentd host is forwarded.

2-1) Application logs: "container_name", "namespace_name", and "pod_name" are included in the output content.
   facility - the value configured in output-remote-syslog.conf is set; 'user' by default.
   severity - the value configured in output-remote-syslog.conf is set; 'info' by default.

2-2) Operation logs:
   facility - record[systemd][u][SYSLOG_FACILITY] is set as the facility, if available.
   severity - record[level] is set as the severity, if available.

3) The directive tag_key was improved:
   . tag_key takes multiple values, e.g., tag_key ident,SYSLOG_IDENTIFIER
   . tag_key takes dot-formatted nested tag keys, e.g., tag_key systemd.u.SYSLOG_IDENTIFIER

Notes: If tag_key is not set, the fluentd tag ("output_ops_tag" for operation logs; "output_tag" for container logs) is sent to rsyslog. When a tag_key is specified and its value is found as a record key, the record value is used for the tag. E.g., with
   tag_key ident
   record['ident'] == "myTag"
then "myTag" is set as the tag in the packet sent to rsyslog. If multiple tag_key values are configured, the first hit is picked up and the rest are ignored even if they are found in the record. E.g., with
   tag_key systemd.u.SYSLOG_IDENTIFIER,ident
   record['systemd']['u']['SYSLOG_IDENTIFIER'] == "myTag0"
   record['ident'] == "myTag1"
then "myTag0" is set as the tag in the packet sent to rsyslog. If none of the tag_key values hits, it falls back to the default behaviour: output_ops_tag for operation logs or output_tag for container logs is sent.

Sample logs in /var/log/messages when use_record is set to true:

Log test message by logger: rsyslogTestMessage-20180215-145124
3,17,Feb 15 14:52:25,ip-172-18-5-234.ec2.internal,rsyslogTestTag:, rsyslogTestMessage-20180215-145124

Log test message by kibana: testKibanaMessage-20180215-145225
6,16,Feb 15 14:52:41,ip-172-18-5-234.ec2.internal,output_tag:, namespace_name=logging, container_name=kibana, pod_name=logging-kibana-1-qr6pl, message=GET /testKibanaMessage-20180215-145225 404 4ms - 9.0B
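A minimal sketch of a config enabling the new directives (the host and pod name below are placeholders; a real generated file appears in comment 28):

<store>
@type syslog_buffered
remote_syslog 10.11.12.13
port 514
hostname logging-mux-2-bqnpx
use_record true
tag_key ident,systemd.u.SYSLOG_IDENTIFIER
facility local0
severity info
</store>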
Clone Of:
Environment:
Last Closed: 2018-08-09 22:13:46 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links
- GitHub: openshift/origin-aggregated-logging pull 887 (closed) - Bug 1515715 - No record generator information in rsyslog server (last updated 2021-01-08 22:35:16 UTC)
- Red Hat Product Errata: RHBA-2018:2335 (last updated 2018-08-09 22:14:25 UTC)

Description Anping Li 2017-11-21 09:27:08 UTC
Description of problem:
The fluentd rsyslog plugin uses the pod name as the hostname. If we enable rsyslog for the mux pod, all records use the mux pod name as the hostname, and the process name (ID) tag is also removed, so we cannot find the record's generator.

Version-Release number of selected component (if applicable):
It is not too bad for container logs, since the pod's IP is present in the container log.

How reproducible:
always

Steps to Reproduce:
1) Install rsyslog server

2) deploy logging with mux and rsyslog
openshift_logging_install_logging=true
openshift_logging_use_mux=true
openshift_logging_mux_client_mode=maximal
openshift_logging_mux_remote_syslog=true
openshift_logging_mux_remote_syslog_host=192.168.1.221

3) Check the records in /var/log/messages and rsyslog server

4) check the container logs and rsyslog server

Actual results:
Step 3: The hostname is the logging-mux pod name and the original tag is not in the rsyslog record, so we cannot find the record's generator.

#3.1) /var/log/messages

Nov 21 03:46:08 openshift-210 dnsmasq[49615]: using nameserver 127.0.0.1#53 for domain in-addr.arpa
Nov 21 03:46:08 openshift-210 dnsmasq[49615]: using nameserver 127.0.0.1#53 for domain cluster.local

#3.2) In rsyslog server files

2017-11-21T04:03:46-05:00 logging-mux-6-9v8sq output_tag: using nameserver 127.0.0.1#53 for domain in-addr.arpa
2017-11-21T04:03:46-05:00 logging-mux-6-9v8sq output_tag: using nameserver 127.0.0.1#53 for domain cluster.local

Step 4: The hostname is the logging-mux pod name and the original tag is likewise missing.

#4.1)  in container logs
{"log":"10.130.0.1 - - [21/Nov/2017:09:21:46 +0000] \"GET /health.php HTTP/1.1\" 200 2 \"-\" \"Go-http-client/1.1\"\n","stream":"stdout","time":"2017-11-21T09:21:46.389091233Z"}
{"log":"10.130.0.1 - - [21/Nov/2017:09:21:46 +0000] \"GET /health.php HTTP/1.1\" 200 2 \"-\" \"Go-http-client/1.1\"\n","stream":"stdout","time":"2017-11-21T09:21:46.389341534Z"}
{"log":"10.130.0.1 - - [21/Nov/2017:09:21:56 +0000] \"GET /health.php HTTP/1.1\" 200 2 \"-\" \"Go-http-client/1.1\"\n","stream":"stdout","time":"2017-11-21T09:21:56.389292661Z"}

#4.2) in rsyslog server
2017-11-21T04:04:46-05:00 logging-mux-6-9v8sq output_tag: 10.130.0.1 - - [21/Nov/2017:09:03:56 +0000] "GET /health.php HTTP/1.1" 200 2 "-" "Go-http-client/1.1"
2017-11-21T04:04:46-05:00 logging-mux-6-9v8sq output_tag: 10.130.0.1 - - [21/Nov/2017:09:03:56 +0000] "GET /health.php HTTP/1.1" 200 2 "-" "Go-http-client/1.1"

Expected results:
For system logs, the hostname should be the node name, and the process name[id] should be kept by default.

For container logs, the hostname should be the container's name.

Additional info:

Comment 1 Noriko Hosoi 2018-01-09 20:42:07 UTC
Hi @Anping,

Could you please attach the fluentd syslog config file /etc/fluent/configs.d/dynamic/output-remote-syslog.conf?

The file should have the remote_syslog config param, whose value is supposed to be the one you passed with openshift_logging_mux_remote_syslog_host.
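
For example, the file can be dumped from a running pod like this (the pod name below is a placeholder):

oc exec logging-mux-2-bqnpx -- cat /etc/fluent/configs.d/dynamic/output-remote-syslog.conf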

When I run ansible with the option "-e openshift_logging_mux_remote_syslog_host=10.11.12.13", the value is passed to the config file as expected.

output-remote-syslog.conf: 
## This file was generated by generate-syslog-config.rb
<store>
@type syslog_buffered
remote_syslog 10.11.12.13
port 514
hostname logging-mux-2-bqnpx
facility local0
severity debug
</store>

Note: I'm adding this test case to remote-syslog.sh https://github.com/openshift/origin-aggregated-logging/pull/887

And I don't see any difference between mux and the standalone fluentd in this aspect.  Do you observe the problem just in mux, not in fluentd?

Thanks!

Comment 2 Anping Li 2018-01-11 10:56:10 UTC
sh-4.2# cat output-remote-syslog.conf 
## This file was generated by generate-syslog-config.rb
<store>
@type syslog_buffered
remote_syslog 172.16.120.22
port 514
hostname logging-mux-2-xqbhj
facility local0
severity debug
</store>



My test steps are as follows:
1. Write log in pods
oc debug ${pod_name}
# echo anli_print_message_in_container_${pod_name}-debug

2. Search the message in rsyslog server

# grep -r 'anli_print_message_in_container_${pod_name}-debug' 172.16.120.87.log

Jan 11 10:41:29 logging-mux-2-xqbhj output_tag: anli_print_message_in_container_nodejs-mongodb-example-1-sjr8k-debug#015


The message layout is like | $time | $mux_pod_name | output_tag | $message |. We cannot find the generator information of the record. I expected the record to include the $pod_name.

Comment 3 Noriko Hosoi 2018-01-11 20:38:51 UTC
Thanks for the details, @Anping.

The problem is that the fluentd remote-syslog plugin has no ability to forward all the info to rsyslogd...  It is not a mux-only issue; it applies in general.

Here's an example.  
If I run "logger -i -p local4.info -t testTag0 testMessage0", it's logged by journald like this:
Jan 11 18:43:10 ip-172-18-5-6.ec2.internal testTag0[15202]: testMessage0

Fluentd/Mux collects and aggregates the info.  In total, the following data are available for the log.  But the remote-syslog plugin only takes one payload_key, which is "message" by default.  That forces dropping all the other valuable data, as you pointed out.

  "_source": {
      "@timestamp": "2018-01-11T18:43:10.936032+00:00",
      "hostname": "ip-172-18-5-6.ec2.internal",
      "level": "info",
      "message": "testMessage0",
      "pipeline_metadata": {
          "collector": {
              "inputname": "fluent-plugin-systemd",
              "ipaddr4": "10.128.0.75",
              "ipaddr6": "fe80::287a:7ff:fe33:b3e2",
              "name": "fluentd",
              "received_at": "2018-01-11T18:43:11.204654+00:00",
              "version": "0.12.42 1.6.0"
          },  
          "normalizer": {
              "inputname": "fluent-plugin-systemd",
              "ipaddr4": "10.128.0.74",
              "ipaddr6": "fe80::858:aff:fe80:4a",
              "name": "fluentd",
              "received_at": "2018-01-11T18:43:20.173884+00:00",
              "version": "0.12.42 1.6.0"
          }   
      },
      "systemd": {
          "t": {
              "BOOT_ID": "3d94140e774e4ef08f39092fb8c53ef7",
              "GID": "0",
              "MACHINE_ID": "4372f1e2f8c642d3a2f3ed11aa3fe654",
              "PID": "15202",
              "SELINUX_CONTEXT": "unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023",
              "TRANSPORT": "syslog",
              "UID": "0" 
          },  
          "u": {
              "SYSLOG_FACILITY": "20",
              "SYSLOG_IDENTIFIER": "testTag0",
              "SYSLOG_PID": "15202"
          }   
      }
  },

I'd say this is a limitation of the current fluentd remote-syslog plugin, and we should convert this bug into an RFE to extend payload_key to take multiple fields (or convert its type to an array).
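
For illustration, here is the current single-field behaviour expressed as config; payload_key is the plugin default named above, and the host and pod name are placeholders:

<store>
@type syslog_buffered
remote_syslog 10.11.12.13
port 514
hostname logging-mux-2-bqnpx
payload_key message
facility local0
severity debug
</store>

Only record['message'] is forwarded; everything else in the record above is dropped.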

Comment 4 Anping Li 2018-01-12 00:49:18 UTC
@Noriko Hosoi, agreed. Added the RFE tag to the title.

Comment 5 openshift-github-bot 2018-01-13 05:02:32 UTC
Commits pushed to master at https://github.com/openshift/origin-aggregated-logging

https://github.com/openshift/origin-aggregated-logging/commit/0b1b964250899ea5cb9339bfd18018b5025da131
Bug 1515715 - No record generator information in rsyslog server

Adding a test 6, verify openshift_logging_mux_remote_syslog_host is respected in the mux pod

https://github.com/openshift/origin-aggregated-logging/commit/f756e1428a4b41dbfe9d18355ea1538801102d30
Merge pull request #887 from nhosoi/bz1515715

Automatic merge from submit-queue.

Bug 1515715 - No record generator information in rsyslog server

Comment 13 Noriko Hosoi 2018-02-22 00:44:05 UTC
PR/909 was closed in favour of:
https://github.com/openshift/origin-aggregated-logging/pull/955

Comment 14 Noriko Hosoi 2018-03-08 01:51:26 UTC
*** Bug 1543761 has been marked as a duplicate of this bug. ***

Comment 18 Anping Li 2018-05-02 11:47:13 UTC
Couldn't see records in rsyslog when using logging:v3.9.27.

sh-4.2# cat ./etc/fluent/configs.d/dynamic/output-remote-syslog.conf
## This file was generated by generate-syslog-config.rb
<store>
@type syslog_buffered
remote_syslog 172.30.245.22
port 514
hostname logging-fluentd-2wrjv
facility local0
severity debug
</store>

<store>
@type syslog_buffered
remote_syslog 172.30.127.158
port 514
hostname logging-fluentd-2wrjv
facility local0
severity debug
</store>




[root@ip-172-18-5-188 log]# cat fluentd.log 
2018-05-02 07:43:30 -0400 [info]: fluent/supervisor.rb:471:read_config: reading config file path="/etc/fluent/fluent.conf"
2018-05-02 07:43:32 -0400 [warn]: fluent/buffer.rb:164:configure: 'block' action stops input process until the buffer full is resolved. Check your pipeline this action is fit or not
2018-05-02 07:43:41 -0400 [warn]: fluent/output.rb:381:rescue in try_flush: temporarily failed to flush the buffer. next_retry=2018-05-02 07:43:41 -0400 error_class="NoMethodError" error="undefined method `any?' for nil:NilClass" plugin_id="object:3f8262432618"
  2018-05-02 07:43:41 -0400 [warn]: /etc/fluent/plugin/out_syslog_buffered.rb:127:in `send_to_syslog'
  2018-05-02 07:43:41 -0400 [warn]: /etc/fluent/plugin/out_syslog_buffered.rb:90:in `block in write'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/plugin/buf_memory.rb:67:in `feed_each'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/plugin/buf_memory.rb:67:in `msgpack_each'
  2018-05-02 07:43:41 -0400 [warn]: /etc/fluent/plugin/out_syslog_buffered.rb:89:in `write'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/buffer.rb:354:in `write_chunk'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/buffer.rb:333:in `pop'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/output.rb:342:in `try_flush'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/output.rb:149:in `run'
2018-05-02 07:43:41 -0400 [warn]: fluent/output.rb:381:rescue in try_flush: temporarily failed to flush the buffer. next_retry=2018-05-02 07:43:42 -0400 error_class="NoMethodError" error="undefined method `any?' for nil:NilClass" plugin_id="object:3f825e6559f8"
  2018-05-02 07:43:41 -0400 [warn]: /etc/fluent/plugin/out_syslog_buffered.rb:127:in `send_to_syslog'
  2018-05-02 07:43:41 -0400 [warn]: /etc/fluent/plugin/out_syslog_buffered.rb:90:in `block in write'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/plugin/buf_memory.rb:67:in `feed_each'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/plugin/buf_memory.rb:67:in `msgpack_each'
  2018-05-02 07:43:41 -0400 [warn]: /etc/fluent/plugin/out_syslog_buffered.rb:89:in `write'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/buffer.rb:354:in `write_chunk'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/buffer.rb:333:in `pop'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/output.rb:342:in `try_flush'
  2018-05-02 07:43:41 -0400 [warn]: /usr/share/gems/gems/fluentd-0.12.42/lib/fluent/output.rb:149:in `run'
2018-05-02 07:43:41 -0400 [warn]: fluent/output.rb:381:rescue in try_flush: temporarily failed to flush the buffer. next_retry=2018-05-02 07:43:43 -0400 error_class="NoMethodError" error="undefined method `any?' for nil:NilClass" plugin_id="object:3f8262432618"

Comment 19 Noriko Hosoi 2018-05-02 18:16:08 UTC
Thanks for the failed case, @Anping!  Fixing it now...

Comment 21 openshift-github-bot 2018-05-08 19:52:26 UTC
Commits pushed to master at https://github.com/openshift/origin-aggregated-logging

https://github.com/openshift/origin-aggregated-logging/commit/130ad64bb6f76961cfb87f840ad680623e5c5690
Bug 1515715 - [RFE]No record generator information in rsyslog server

Fixing a test failure -- no tag_key

https://github.com/openshift/origin-aggregated-logging/commit/8d9dca8b1a8d368dc86a17a7964cc1eb6c2481ae
Merge pull request #1134 from nhosoi/bz1515715

Bug 1515715 - [RFE]No record generator information in rsyslog server

Comment 25 openshift-github-bot 2018-06-20 15:06:26 UTC
Commits pushed to master at https://github.com/openshift/origin-aggregated-logging

https://github.com/openshift/origin-aggregated-logging/commit/e1f481b316cfaa9b18bbcf01b416db41bc8ca63e
Bug 1515715 - [RFE]No record generator information in rsyslog server

Fixing a test failure -- no tag_key

https://github.com/openshift/origin-aggregated-logging/commit/e49bca4a23869ac757bb81efcb4db0b92d0a7539
Merge pull request #1149 from openshift-cherrypick-robot/cherry-pick-1134-to-es5.x

[es5.x] Bug 1515715 - [RFE]No record generator information in rsyslog server

Comment 27 Anping Li 2018-07-26 12:44:56 UTC
With openshift3/logging-fluentd/images/v3.9.38-1, I still couldn't see the record generator information in rsyslog.


1) fluentd + container logs

The pod name wasn't in the rsyslog record. In my testing, it should be pod/logging-curator-1-5bzjp-debug.

Debug line with all properties:
FROMHOST: 'ip-10-2-4-1.ec2.internal', fromhost-ip: '10.2.4.1', HOSTNAME: 'logging-fluentd-6dcw6', PRI: 135,
syslogtag 'output_tag:', programname: 'output_tag', APP-NAME: 'output_tag', PROCID: '-', MSGID: '-',
TIMESTAMP: 'Jul 26 08:34:46', STRUCTURED-DATA: '-',
msg: ' dockermessageanli#015'
escaped msg: ' dockermessageanli#015'
inputname: imptcp rawmsg: '<135>Jul 26 08:34:46 logging-fluentd-6dcw6 output_tag: dockermessageanli#015'


2) fluentd + journald logs

The original SYSLOG_PID and TAG are missing.


#logger -i -p local4.info -t testTag0 testMessage0
{ "__CURSOR" : "s=3329e7254a4b4de8a98c1969391f6610;i=4c54fb;b=75c72e5b1c44473aac9217ac331618d9;m=8a9d08fad;t=571e6543a24c3;x=e6f5643f94918498", "__REALTIME_TIMESTAMP" : "1532608953066691", "__MONOTONIC_TIMESTAMP" : "37208756141", "_BOOT_ID" : "75c72e5b1c44473aac9217ac331618d9", "PRIORITY" : "6", "_UID" : "0", "_GID" : "0", "_CAP_EFFECTIVE" : "1fffffffff", "_MACHINE_ID" : "067b093f6b6247e3aef97447db56fea2", "_HOSTNAME" : "ip-172-18-13-62.ec2.internal", "_TRANSPORT" : "syslog", "SYSLOG_FACILITY" : "20", "SYSLOG_IDENTIFIER" : "testTag0", "SYSLOG_PID" : "38385", "MESSAGE" : "testMessage0", "_PID" : "38385", "_COMM" : "logger", "_AUDIT_SESSION" : "32", "_AUDIT_LOGINUID" : "0", "_SYSTEMD_CGROUP" : "/", "_SELINUX_CONTEXT" : "unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023", "_SOURCE_REALTIME_TIMESTAMP" : "1532608953066083" }



# In rsyslog:

Debug line with all properties:
FROMHOST: 'ip-10-2-0-1.ec2.internal', fromhost-ip: '10.2.0.1', HOSTNAME: 'logging-fluentd-hzg7f', PRI: 135,
syslogtag 'output_tag:', programname: 'output_tag', APP-NAME: 'output_tag', PROCID: '-', MSGID: '-',
TIMESTAMP: 'Jul 26 08:25:02', STRUCTURED-DATA: '-',
msg: ' testMessage0'
escaped msg: ' testMessage0'
inputname: imptcp rawmsg: '<135>Jul 26 08:25:02 logging-fluentd-hzg7f output_tag: testMessage0'

Comment 28 Noriko Hosoi 2018-07-26 16:37:55 UTC
Hi @Anping, the patch should be in v3.9.22-2 and newer.

Could you post your output-remote-syslog.conf?  I'm interested in the use_record and tag_key values.

Regarding the pod name, it is supposed to be logged in the message as 'pod_name=<POD_NAME>'...

FYI, you can find the test case in the upstream CI tests:
  origin-aggregated-logging/test/remote-syslog.sh
  title="Test 6, use rsyslogd on the node"
This is the config file generated in the CI test.
==> /etc/fluent/configs.d/dynamic/output-remote-syslog.conf <==
## This file was generated by generate-syslog-config.rb
<store>
@type syslog_buffered
@id remote-syslog-input
remote_syslog ip-172-18-12-241.ec2.internal
port 601
hostname logging-mux-7-4dsnh
tag_key ident,systemd.u.SYSLOG_IDENTIFIER,local1.err
facility local0
severity info
use_record true
</store>

Comment 29 Anping Li 2018-07-27 10:53:00 UTC
The logs are sent by logging-fluentd in my testing.

## This file was generated by generate-syslog-config.rb
<store>
@type syslog_buffered
remote_syslog 172.31.8.104
port 514
hostname logging-fluentd-lznmf
facility local0
severity debug
</store>

<store>
@type syslog_buffered
remote_syslog 172.31.65.236
port 514
hostname logging-fluentd-lznmf
facility local0
severity debug
</store>

Comment 30 Anping Li 2018-07-27 11:10:13 UTC
For logging-mux, I get a similar config file.

## This file was generated by generate-syslog-config.rb
<store>
@type syslog_buffered
remote_syslog 172.31.138.67
port 514
hostname logging-mux-3-7bcl7
facility local0
severity debug
</store>

<store>
@type syslog_buffered
remote_syslog 172.31.47.14
port 514
hostname logging-mux-3-7bcl7
facility local0
severity debug
</store>

Comment 31 Anping Li 2018-07-27 11:13:54 UTC
logging-fluentd/images/v3.9.38-1

sh-4.2# gem list |grep sys 
fluent-plugin-remote-syslog (1.1)
fluent-plugin-systemd (0.0.9)
syslog_protocol (0.9.2)
systemd-journal (1.3.1)

Comment 32 Noriko Hosoi 2018-07-27 17:11:20 UTC
Thanks for the config file, Anping.

Could you add "use_record true" to your config and see if it changes the behaviour?  It can be done by:
oc set env daemonset/logging-fluentd REMOTE_SYSLOG_USE_RECORD=true
(or, for mux: oc set env dc/logging-mux REMOTE_SYSLOG_USE_RECORD=true)

Then, could you add "tag_key ident,systemd.u.SYSLOG_IDENTIFIER" and repeat the test?
oc set env daemonset/logging-fluentd REMOTE_SYSLOG_TAG_KEY='ident,systemd.u.SYSLOG_IDENTIFIER'
(or, for mux: oc set env dc/logging-mux REMOTE_SYSLOG_TAG_KEY='ident,systemd.u.SYSLOG_IDENTIFIER')

Please see also "Doc Text" of this bug.  If you could review it and provide your inputs to improve it, I'd greatly appreciate it.

Comment 33 Anping Li 2018-07-31 09:33:28 UTC
The tag_key can be added via the environment variable.
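For example, reusing the command suggested in comment 32:

oc set env daemonset/logging-fluentd REMOTE_SYSLOG_TAG_KEY='ident,systemd.u.SYSLOG_IDENTIFIER'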

sh-4.2# cat ./etc/fluent/configs.d/dynamic/output-remote-syslog.conf
## This file was generated by generate-syslog-config.rb
<store>
@type syslog_buffered
remote_syslog 172.31.70.146
port 514
hostname logging-fluentd-czklb
tag_key ident,systemd.u.SYSLOG_IDENTIFIER

facility local0
severity debug
</store>

The journald log looks good:


Debug line with all properties:
FROMHOST: 'ip-10-2-0-1.ec2.internal', fromhost-ip: '10.2.0.1', HOSTNAME: 'logging-fluentd-7tkc2', PRI: 135,
syslogtag 'anlitag:', programname: 'anlitag', APP-NAME: 'anlitag', PROCID: '-', MSGID: '-',
TIMESTAMP: 'Jul 31 05:18:24', STRUCTURED-DATA: '-',
msg: ' anlimessagezzz29'
escaped msg: ' anlimessagezzz29'
inputname: imptcp rawmsg: '<135>Jul 31 05:18:24 logging-fluentd-7tkc2 anlitag: anlimessagezzz29'

Comment 34 Anping Li 2018-07-31 09:42:29 UTC
Two issues:
1. How can I enable tag_key for the REMOTE_SYSLOG_HOST_BACKUP?

        - name: USE_REMOTE_SYSLOG
          value: "true"
        - name: REMOTE_SYSLOG_HOST
          value: 172.31.70.146
        - name: REMOTE_SYSLOG_HOST_BACKUP
          value: 172.31.173.5
        - name: REMOTE_SYSLOG_PORT_BACKUP
          value: "514"
        - name: REMOTE_SYSLOG_TAG_KEY
          value: |
            ident,systemd.u.SYSLOG_IDENTIFIER
2. No pod_name/uuid when using docker json-file logs. A message is shown below. I think the programname should be the pod_name.


Debug line with all properties:
FROMHOST: 'ip-10-2-2-1.ec2.internal', fromhost-ip: '10.2.2.1', HOSTNAME: 'logging-fluentd-czklb', PRI: 135,
syslogtag 'output_tag:', programname: 'output_tag', APP-NAME: 'output_tag', PROCID: '-', MSGID: '-',
TIMESTAMP: 'Jul 31 05:24:18', STRUCTURED-DATA: '-',
msg: ' 10.2.2.1 - - [31/Jul/2018:09:23:54 +0000] "GET /healthz HTTP/2.0" 200 0 "" "kube-probe/1.9"'
escaped msg: ' 10.2.2.1 - - [31/Jul/2018:09:23:54 +0000] "GET /healthz HTTP/2.0" 200 0 "" "kube-probe/1.9"'
inputname: imptcp rawmsg: '<135>Jul 31 05:24:18 logging-fluentd-czklb output_tag: 10.2.2.1 - - [31/Jul/2018:09:23:54 +0000] "GET /healthz HTTP/2.0" 200 0 "" "kube-probe/1.9"'
$!:
$.:
$/:

Comment 35 Noriko Hosoi 2018-07-31 19:00:34 UTC
> 1. How can I enable tag_key for the REMOTE_SYSLOG_HOST_BACKUP?
Do you mean you want to use the value of REMOTE_SYSLOG_HOST_BACKUP as the tag_key?
Could you try setting hostname as the tag_key?
  <store>
  @type syslog_buffered
  remote_syslog 172.31.70.146
  port 514
  hostname logging-fluentd-czklb
  tag_key hostname
  use_record true
  ....
  </store>

> 2. No pod_name/uuid when using docker json-file logs

Unfortunately, the remote-syslog plugin is not implemented that way.  It does not update the programname value.  The PR for this bug adds the namespace name, container name, and pod name to the message.  This is an example from the upstream CI test remote-syslog.sh:

[2018-07-31T17:57:28.574+0000] 6,16,Jul 31 17:57:28,ip-172-18-7-210.ec2.internal,output_ops_tag:, namespace_name=openshift-logging, container_name=kibana, pod_name=logging-kibana-1-wvpbh, message=GET /deee3211c1c1454288923ec4eafe09f7 404 2ms - 9.0B

Comment 36 Anping Li 2018-07-31 23:42:01 UTC
1. How can I enable tag_key for the REMOTE_SYSLOG_HOST_BACKUP?
No, I want to add REMOTE_SYSLOG_TAG_KEY for the REMOTE_SYSLOG_HOST_BACKUP (the second rsyslog server). Can I use an environment variable?


2. No pod_name/uuid when using docker json-file logs
It seems "tag_key ident,systemd.u.SYSLOG_IDENTIFIER" only works for journald logs.  I couldn't find the pod name/namespace name.  What tag_key should I use to collect json-file container logs?

Comment 37 Noriko Hosoi 2018-08-01 00:13:42 UTC
(In reply to Anping Li from comment #36)
> 1. How can I enable tag_key for the REMOTE_SYSLOG_HOST_BACKUP?
> No, I want to add REMOTE_SYSLOG_TAG_KEY for the
> REMOTE_SYSLOG_HOST_BACKUP (the second rsyslog server). Can I use an
> environment variable?

Well, now I'm confused...  May I ask where the REMOTE_SYSLOG_HOST_BACKUP came from?  Environment variables starting with REMOTE_SYSLOG_ are defined here.

https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/generate_syslog_config.rb#L16-L24

There is no HOST_BACKUP or PORT_BACKUP...

> 2. No pod_name/uuid when using docker json-file logs
> It seems "tag_key ident,systemd.u.SYSLOG_IDENTIFIER" only works for
> journald logs.

I don't think so.  When I ran this test, the docker log driver was json-file, and this log is from an application pod (in this example, a kibana pod :).

[2018-07-31T17:57:28.574+0000] 6,16,Jul 31 17:57:28,ip-172-18-7-210.ec2.internal,output_ops_tag:, namespace_name=openshift-logging, container_name=kibana, pod_name=logging-kibana-1-wvpbh, message=GET /deee3211c1c1454288923ec4eafe09f7 404 2ms - 9.0B

> I couldn't find the pod name/namespace name.  What tag_key
> should I use to collect json-file container logs?

When I ran it, this set of values was assigned to tag_key, I believe:
REMOTE_SYSLOG_TAG_KEY='ident,systemd.u.SYSLOG_IDENTIFIER,local1.err'

If you are interested, please take a look at this part of the CI test:
https://github.com/openshift/origin-aggregated-logging/blob/master/test/remote-syslog.sh#L293-L413

Comment 38 Anping Li 2018-08-01 01:39:07 UTC
@Noriko

https://docs.openshift.com/container-platform/3.9/install_config/aggregate_logging.html#sending-logs-to-external-rsyslog


- name: REMOTE_SYSLOG_HOST 
  value: host1
- name: REMOTE_SYSLOG_HOST_BACKUP
  value: host2
- name: REMOTE_SYSLOG_PORT_BACKUP
  value: 5555

Comment 39 Noriko Hosoi 2018-08-01 18:56:10 UTC
(In reply to Anping Li from comment #38)
> @Noriko
> 
> https://docs.openshift.com/container-platform/3.9/install_config/
> aggregate_logging.html#sending-logs-to-external-rsyslog
> 
> 
> - name: REMOTE_SYSLOG_HOST 
>   value: host1
> - name: REMOTE_SYSLOG_HOST_BACKUP
>   value: host2
> - name: REMOTE_SYSLOG_PORT_BACKUP
>   value: 5555

Thanks, @Anping.  I found a trick!

First of all, the suffix "_BACKUP" could be anything.  (That's why I could not find "BACKUP" in the source code... :)

oc set env dc/logging-mux REMOTE_SYSLOG_HOST_XXX=host1 REMOTE_SYSLOG_PORT_XXX=1111

The above command line adds this config to output-remote-syslog.conf:
<store>
@type syslog_buffered
@id remote-syslog-input
remote_syslog host1
port 1111
...
</store>

Please note that if you replace REMOTE_SYSLOG_PORT_XXX=1111 with REMOTE_SYSLOG_PORT_YYY=2222, the setting is ignored, because the suffix no longer matches the one used for REMOTE_SYSLOG_HOST_XXX.  That being said, if you add the same suffix to other environment variables, they are applied as well.

Here we go...
oc set env dc/logging-mux USE_REMOTE_SYSLOG=true REMOTE_SYSLOG_HOST_BACKUP=host9 REMOTE_SYSLOG_TAG_KEY_BACKUP='ident,systemd.u.SYSLOG_IDENTIFIER' REMOTE_SYSLOG_PORT_BACKUP=9999

<store>
@type syslog_buffered
@id remote-syslog-input
remote_syslog host9
port 9999
tag_key ident,systemd.u.SYSLOG_IDENTIFIER
...
</store>

Comment 40 Anping Li 2018-08-02 11:32:12 UTC
Verified and passed.

1. Configure fluentd or mux to use the primary and backup rsyslog servers
oc set env dc/logging-mux USE_REMOTE_SYSLOG=true 
oc set env dc/logging-mux REMOTE_SYSLOG_HOST=172.30.74.3 
oc set env dc/logging-mux REMOTE_SYSLOG_TAG_KEY='ident,systemd.u.SYSLOG_IDENTIFIER'
oc set env dc/logging-mux REMOTE_SYSLOG_USE_RECORD=true 
oc set env dc/logging-mux REMOTE_SYSLOG_SEVERITY=info


oc set env dc/logging-mux USE_REMOTE_SYSLOG_BACKUP=true 
oc set env dc/logging-mux REMOTE_SYSLOG_HOST_BACKUP=172.30.74.3 
oc set env dc/logging-mux REMOTE_SYSLOG_TAG_KEY_BACKUP='ident,systemd.u.SYSLOG_IDENTIFIER'
oc set env dc/logging-mux REMOTE_SYSLOG_USE_RECORD_BACKUP=true 
oc set env dc/logging-mux REMOTE_SYSLOG_SEVERITY_BACKUP=info



2. The values in output-remote-syslog.conf:
<store>
@type syslog_buffered
remote_syslog 172.30.74.3
port 514
hostname logging-mux-9-qgvc8
tag_key ident,systemd.u.SYSLOG_IDENTIFIER
facility local0
severity info
use_record true
</store>

<store>
@type syslog_buffered
remote_syslog 172.30.139.2
port 514
hostname logging-mux-9-qgvc8
tag_key ident,systemd.u.SYSLOG_IDENTIFIER
facility local0
severity error
use_record true
</store>

3.1 The system log example:
Debug line with all properties:
FROMHOST: '10.128.0.1', fromhost-ip: '10.128.0.1', HOSTNAME: 'qe-310node-registry-router-2', PRI: 30,
syslogtag 'atomic-openshift-node:', programname: 'atomic-openshift-node', APP-NAME: 'atomic-openshift-node', PROCID: '-', MSGID: '-',
TIMESTAMP: 'Aug  2 11:20:48', STRUCTURED-DATA: '-',
msg: ' :KUBE-SEP-JGR2KZDCL5X2ZBXD - [0:0]'
escaped msg: ' :KUBE-SEP-JGR2KZDCL5X2ZBXD - [0:0]'
inputname: imptcp rawmsg: '<30>Aug  2 11:20:48 qe-310node-registry-router-2 atomic-openshift-node: :KUBE-SEP-JGR2KZDCL5X2ZBXD - [0:0]'

3.2 The container logs example:
Debug line with all properties:
FROMHOST: '10.128.0.1', fromhost-ip: '10.128.0.1', HOSTNAME: 'qe-310node-registry-router-1', PRI: 134,
syslogtag 'output_tag:', programname: 'output_tag', APP-NAME: 'output_tag', PROCID: '-', MSGID: '-',
TIMESTAMP: 'Aug  2 11:21:50', STRUCTURED-DATA: '-',
msg: ' namespace_name=install-test, container_name=nodejs-mongodb-example, pod_name=nodejs-mongodb-example-1-k5wrl-debug, message=anlidocker111184#015'
escaped msg: ' namespace_name=install-test, container_name=nodejs-mongodb-example, pod_name=nodejs-mongodb-example-1-k5wrl-debug, message=anlidocker111184#015'
inputname: imptcp rawmsg: '<134>Aug  2 11:21:50 qe-310node-registry-router-1 output_tag: namespace_name=install-test, container_name=nodejs-mongodb-example, pod_name=nodejs-mongodb-example-1-k5wrl-debug, message=anlidocker111184#015'
$!:

Comment 42 errata-xmlrpc 2018-08-09 22:13:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2335

