Bug 1731301 - `oc log $fluentd-pod` doesn't show any logs after setting `LOGGING_FILE_PATH=console`.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-19 01:03 UTC by Qiaoling Tang
Modified: 2019-07-25 06:17 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-07-25 06:17:04 UTC
Target Upstream Version:



Description Qiaoling Tang 2019-07-19 01:03:22 UTC
Description of problem:
Set LOGGING_FILE_PATH=console in the fluentd pod's env, wait a few minutes, then check the fluentd pod logs: nothing is shown.

The pods have been running for 16 hours, and log_level is set to trace.

$ oc get pod
NAME                                            READY   STATUS      RESTARTS   AGE
cluster-logging-operator-d57b7d7ff-d8fgx        1/1     Running     0          18h
curator-1563497400-zql6v                        0/1     Completed   0          4m32s
elasticsearch-cdm-szj894kt-1-6d4d858bc7-mxrlw   2/2     Running     0          17h
elasticsearch-cdm-szj894kt-2-56fdbc45c7-2xk2f   2/2     Running     0          18h
fluentd-52ncx                                   1/1     Running     0          16h
fluentd-ctqdq                                   1/1     Running     0          16h
fluentd-h9m7m                                   1/1     Running     0          16h
fluentd-lxhtl                                   1/1     Running     0          16h
fluentd-r2dcz                                   1/1     Running     0          16h
fluentd-w25t4                                   1/1     Running     0          16h
kibana-5cbd5cc9c9-h9hwd                         2/2     Running     0          18h
$ oc exec fluentd-h9m7m env |grep LOGGING
LOGGING_FILE_PATH=console
LOGGING_FILE_AGE=10
LOGGING_FILE_SIZE=1024000

$ oc get cm fluentd -oyaml
apiVersion: v1
data:
  fluent.conf: |
    # This file is the fluentd configuration entrypoint. Edit with care.

    #@include configs.d/openshift/system.conf
    <system>
      @log_level trace
    </system>


Version-Release number of selected component (if applicable):
ose-logging-fluentd-v4.2.0-201907141446
ose-cluster-logging-operator-v4.2.0-201907161019

How reproducible:
Always

Steps to Reproduce:
1. Deploy logging using fluentd as the log collector
2. Set log_level trace in the fluentd configmap
3. oc set env ds/fluentd LOGGING_FILE_PATH=console
4. Run `oc logs $fluentd-pod` to view the fluentd pod logs

Actual results:
`oc logs $fluentd-pod` doesn't show anything.

Expected results:
Logs should be viewable with `oc logs $fluentd-pod` after setting LOGGING_FILE_PATH=console.

Additional info:
With the default value, the fluentd logs can be viewed with `oc exec $fluentd-pod -- logs`.
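For context, this console-vs-file behavior looks like an entrypoint that branches on LOGGING_FILE_PATH. The sketch below is a hypothetical reconstruction of that logic (only the variable name and the `console` sentinel come from this report; the function name and default path are assumptions), not the shipped entrypoint script:

```shell
# Hypothetical sketch: where fluentd output ends up for a given
# LOGGING_FILE_PATH value. "console" means stdout, so `oc logs` works;
# any other value is treated as a file path, readable only with
# `oc exec <pod> -- logs`. The default path is an assumption.
fluentd_log_dest() {
  local path="${LOGGING_FILE_PATH:-/var/log/fluentd/fluentd.log}"
  if [ "$path" = "console" ]; then
    echo "stdout"
  else
    echo "$path"
  fi
}
```

Under this sketch, LOGGING_FILE_PATH=console yields `stdout`, which matches the expectation that `oc logs` should show output after the change.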

Comment 1 Qiaoling Tang 2019-07-19 01:07:22 UTC
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-07-18-032637   True        False         18h     Cluster version is 4.2.0-0.nightly-2019-07-18-032637

Comment 2 Noriko Hosoi 2019-07-19 02:16:02 UTC
I cannot reproduce the problem.
$ oc logs fluentd-8965g | tail -n 4
2019-07-19 01:44:22 +0000 [warn]: [elasticsearch-apps] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=43.58411700000579 slow_flush_log_threshold=20.0 plugin_id="elasticsearch-apps"
2019-07-19 01:44:50 +0000 [warn]: [elasticsearch-apps] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=41.06120014299813 slow_flush_log_threshold=20.0 plugin_id="elasticsearch-apps"
2019-07-19 01:45:23 +0000 [warn]: [elasticsearch-apps] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=60.60765783599345 slow_flush_log_threshold=20.0 plugin_id="elasticsearch-apps"
2019-07-19 01:45:33 +0000 [warn]: [elasticsearch-apps] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=41.647725964998244 slow_flush_log_threshold=20.0 plugin_id="elasticsearch-apps"

Please note that I'm running my local build, but the fluentd logging code has no changes.

I noticed an odd thing.  When I switched LOGGING_FILE_PATH with the command line `oc set env ds/fluentd LOGGING_FILE_PATH=console`, `oc logs $fluentd_pod` returned nothing, as described in this bug.  So I thought I could reproduce the bug.  Then I rsh'ed into one of the fluentd pods, looked around for a while, exited the pod, and reran `oc logs $fluentd_pod`; this time it returned logs, as shown above.  It looks to me like there is some delay, but I have no explanation for it.

@Qiaoling, I wonder if there is a similar delay in your test env as well?

Comment 3 Qiaoling Tang 2019-07-19 02:28:39 UTC
Yes, I can see some logs now, and I didn't do anything. 

My pods have been running for about 17 hours, but I can only see a small amount of logs.

$ oc logs fluentd-52ncx
2019-07-18 11:13:13 +0000 [warn]: [elasticsearch-apps] failed to flush the buffer. retry_time=0 next_retry_seconds=2019-07-18 11:13:14 +0000 chunk="58df2b2118e717cd33f70a7b18775cb5" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): [401] Authentication finally failed"
  2019-07-18 11:13:13 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-3.5.2/lib/fluent/plugin/out_elasticsearch.rb:757:in `rescue in send_bulk'
  2019-07-18 11:13:13 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-3.5.2/lib/fluent/plugin/out_elasticsearch.rb:732:in `send_bulk'
  2019-07-18 11:13:13 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-3.5.2/lib/fluent/plugin/out_elasticsearch.rb:627:in `block in write'
  2019-07-18 11:13:13 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-3.5.2/lib/fluent/plugin/out_elasticsearch.rb:626:in `each'
  2019-07-18 11:13:13 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-3.5.2/lib/fluent/plugin/out_elasticsearch.rb:626:in `write'
  2019-07-18 11:13:13 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.5.1/lib/fluent/plugin/output.rb:1125:in `try_flush'
  2019-07-18 11:13:13 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.5.1/lib/fluent/plugin/output.rb:1425:in `flush_thread_run'
  2019-07-18 11:13:13 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.5.1/lib/fluent/plugin/output.rb:454:in `block (2 levels) in start'
  2019-07-18 11:13:13 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.5.1/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2019-07-18 11:13:14 +0000 [warn]: [elasticsearch-apps] retry succeeded. chunk_id="58df2b220d9a8fb1fb751e8db8785bc5"
[qitang@wlc-trust-182 aws]$ oc logs fluentd-r2dcz
2019-07-19 00:45:15 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.kube-apiserver-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-apiserver_setup-699f7f473d5f5cd0b9378ce5b10bd5c6a2e07bb6175b3a632be8260d158d5158.log:default" location=nil tag="kubernetes.var.log.containers.kube-apiserver-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-apiserver_setup-699f7f473d5f5cd0b9378ce5b10bd5c6a2e07bb6175b3a632be8260d158d5158.log" time=2019-07-19 00:45:15.748477769 +0000 record={"time"=>"2019-07-19T00:43:27.850258684+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Fixing audit permissions.Waiting for port :6443 to be released."}
2019-07-19 00:48:15 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.kube-controller-manager-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-controller-manager_wait-for-host-port-caf0d1542fa491e955ad1c70724d43cfa4e6ab83f6b87a067cbc29ce2feaae20.log:default" location=nil tag="kubernetes.var.log.containers.kube-controller-manager-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-controller-manager_wait-for-host-port-caf0d1542fa491e955ad1c70724d43cfa4e6ab83f6b87a067cbc29ce2feaae20.log" time=2019-07-19 00:48:15.678040412 +0000 record={"time"=>"2019-07-19T00:46:39.740122367+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Waiting for port :10257 to be released."}
2019-07-19 00:50:15 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.kube-apiserver-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-apiserver_setup-7f084be2f055e29b5fc5340483feca9a7033090f894d6b4e57a468f90250b126.log:default" location=nil tag="kubernetes.var.log.containers.kube-apiserver-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-apiserver_setup-7f084be2f055e29b5fc5340483feca9a7033090f894d6b4e57a468f90250b126.log" time=2019-07-19 00:50:15.717637829 +0000 record={"time"=>"2019-07-19T00:48:42.059733853+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Fixing audit permissions.Waiting for port :6443 to be released."}

Comment 4 Qiaoling Tang 2019-07-19 02:39:26 UTC
Then I set LOGGING_FILE_PATH back to the default; I couldn't view the latest logs in the fluentd pod either:

$ oc get pod
NAME                                            READY   STATUS      RESTARTS   AGE
cluster-logging-operator-d57b7d7ff-d8fgx        1/1     Running     0          19h
curator-1563503400-cttzm                        0/1     Completed   0          6m30s
elasticsearch-cdm-szj894kt-1-6d4d858bc7-mxrlw   2/2     Running     0          19h
elasticsearch-cdm-szj894kt-2-56fdbc45c7-2xk2f   2/2     Running     0          19h
fluentd-5c9dq                                   1/1     Running     0          18m
fluentd-8tjjt                                   1/1     Running     0          21m
fluentd-kxnjd                                   1/1     Running     0          17m
fluentd-pqzjt                                   1/1     Running     0          19m
fluentd-q2hhv                                   1/1     Running     0          18m
fluentd-vrb6g                                   1/1     Running     0          20m
kibana-5cbd5cc9c9-h9hwd                         2/2     Running     0          19h
[qitang@wlc-trust-182 aws]$ oc exec fluentd-5c9dq -- logs
2019-07-18 07:58:17 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.kube-controller-manager-ip-10-0-154-94.eu-west-2.compute.internal_openshift-kube-controller-manager_wait-for-host-port-01430031a43f2cb899adf81ebd984e03605ca45ef0d7be42590227f0ddd270ac.log:default" location=nil tag="kubernetes.var.log.containers.kube-controller-manager-ip-10-0-154-94.eu-west-2.compute.internal_openshift-kube-controller-manager_wait-for-host-port-01430031a43f2cb899adf81ebd984e03605ca45ef0d7be42590227f0ddd270ac.log" time=2019-07-18 07:58:17.009604294 +0000 record={"time"=>"2019-07-18T05:52:37.574280862+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Waiting for port :10257 to be released."}
2019-07-18 07:58:18 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.openshift-kube-scheduler-ip-10-0-154-94.eu-west-2.compute.internal_openshift-kube-scheduler_wait-for-host-port-374856cd55b3a2bba05595c43ca0ca01edc3a6289056fd343386a726c205dd7a.log:default" location=nil tag="kubernetes.var.log.containers.openshift-kube-scheduler-ip-10-0-154-94.eu-west-2.compute.internal_openshift-kube-scheduler_wait-for-host-port-374856cd55b3a2bba05595c43ca0ca01edc3a6289056fd343386a726c205dd7a.log" time=2019-07-18 07:58:18.010392244 +0000 record={"time"=>"2019-07-18T05:53:56.229916843+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Waiting for port :10259 and :10251 to be released."}
2019-07-18 07:58:18 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.kube-apiserver-ip-10-0-154-94.eu-west-2.compute.internal_openshift-kube-apiserver_setup-73acb82fb3e59968b16b9a60b21f88f5b3414fce9289baf5f3b57fa0593521c7.log:default" location=nil tag="kubernetes.var.log.containers.kube-apiserver-ip-10-0-154-94.eu-west-2.compute.internal_openshift-kube-apiserver_setup-73acb82fb3e59968b16b9a60b21f88f5b3414fce9289baf5f3b57fa0593521c7.log" time=2019-07-18 07:58:18.010392244 +0000 record={"time"=>"2019-07-18T06:10:04.389278117+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Fixing audit permissions.Waiting for port :6443 to be released."}
[qitang@wlc-trust-182 aws]$ oc exec fluentd-8tjjt -- logs
2019-07-18 07:58:25 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.openshift-kube-scheduler-ip-10-0-135-189.eu-west-2.compute.internal_openshift-kube-scheduler_wait-for-host-port-eb93fbdb73cc0c7cb3ddcd4361880ff11b9e95aabd6be14eef57659d084fb197.log:default" location=nil tag="kubernetes.var.log.containers.openshift-kube-scheduler-ip-10-0-135-189.eu-west-2.compute.internal_openshift-kube-scheduler_wait-for-host-port-eb93fbdb73cc0c7cb3ddcd4361880ff11b9e95aabd6be14eef57659d084fb197.log" time=2019-07-18 07:58:25.048563540 +0000 record={"time"=>"2019-07-18T05:52:49.928959697+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Waiting for port :10259 and :10251 to be released."}
2019-07-18 07:58:32 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.kube-controller-manager-ip-10-0-135-189.eu-west-2.compute.internal_openshift-kube-controller-manager_wait-for-host-port-befd0748e5323189a244038df0560ede672dab117c9994eac53711ec82babab1.log:default" location=nil tag="kubernetes.var.log.containers.kube-controller-manager-ip-10-0-135-189.eu-west-2.compute.internal_openshift-kube-controller-manager_wait-for-host-port-befd0748e5323189a244038df0560ede672dab117c9994eac53711ec82babab1.log" time=2019-07-18 07:58:32.048534663 +0000 record={"time"=>"2019-07-18T05:52:12.173265077+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Waiting for port :10257 to be released."}
2019-07-18 07:58:33 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.kube-apiserver-ip-10-0-135-189.eu-west-2.compute.internal_openshift-kube-apiserver_setup-26c8beaafb85b52266a105147e26b45846bd7fc015a806366896d26cff46ab9d.log:default" location=nil tag="kubernetes.var.log.containers.kube-apiserver-ip-10-0-135-189.eu-west-2.compute.internal_openshift-kube-apiserver_setup-26c8beaafb85b52266a105147e26b45846bd7fc015a806366896d26cff46ab9d.log" time=2019-07-18 07:58:33.048128564 +0000 record={"time"=>"2019-07-18T06:13:19.808355997+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Fixing audit permissions.Waiting for port :6443 to be released."}
[qitang@wlc-trust-182 aws]$ oc exec fluentd-q2hhv -- logs
[qitang@wlc-trust-182 aws]$ oc exec fluentd-vrb6g -- logs
[qitang@wlc-trust-182 aws]$ oc exec fluentd-kxnjd -- logs
2019-07-18 07:59:09 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.kube-controller-manager-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-controller-manager_wait-for-host-port-56643bbdd084f3d485e5aad34563959203f0268279043e7232c3ea32a35f73f5.log:default" location=nil tag="kubernetes.var.log.containers.kube-controller-manager-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-controller-manager_wait-for-host-port-56643bbdd084f3d485e5aad34563959203f0268279043e7232c3ea32a35f73f5.log" time=2019-07-18 07:59:09.835491640 +0000 record={"time"=>"2019-07-18T05:51:40.176022693+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Waiting for port :10257 to be released."}
2019-07-18 07:59:09 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.openshift-kube-scheduler-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-scheduler_wait-for-host-port-9ddbdb29924c6ba67c67190b2a2d0aba37367547592facc8970c49ac9a460068.log:default" location=nil tag="kubernetes.var.log.containers.openshift-kube-scheduler-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-scheduler_wait-for-host-port-9ddbdb29924c6ba67c67190b2a2d0aba37367547592facc8970c49ac9a460068.log" time=2019-07-18 07:59:09.835491640 +0000 record={"time"=>"2019-07-18T05:51:41.057628834+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Waiting for port :10259 and :10251 to be released."}
2019-07-18 07:59:13 +0000 [warn]: dump an error event: error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush: kubernetes.var.log.containers.kube-apiserver-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-apiserver_setup-8f1a06bb1230be6f3dfc380ec64b2a442872c420d8e6815c75d7928abf8f2862.log:default" location=nil tag="kubernetes.var.log.containers.kube-apiserver-ip-10-0-163-112.eu-west-2.compute.internal_openshift-kube-apiserver_setup-8f1a06bb1230be6f3dfc380ec64b2a442872c420d8e6815c75d7928abf8f2862.log" time=2019-07-18 07:59:13.835826695 +0000 record={"time"=>"2019-07-18T06:11:40.347247437+00:00", "stream"=>"stdout", "logtag"=>"P", "log"=>"Fixing audit permissions.Waiting for port :6443 to be released."}
[qitang@wlc-trust-182 aws]$ oc logs fluentd-kxnjd
=============================
Fluentd logs have been redirected to: /var/log/fluentd/fluentd.log
If you want to print out the logs, use command:
oc exec <pod_name> -- logs
=============================
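The banner above implies the in-pod `logs` helper simply prints the redirected file. Below is a minimal sketch of such a helper, assuming the path shown in the banner (the real helper's internals and any log-rotation handling are not shown in this report):

```shell
# Hypothetical in-pod helper: print the file fluentd output was
# redirected to. With LOGGING_FILE_PATH=console there is no file,
# which is why `oc exec <pod> -- logs` and `oc logs <pod>` show
# output in mutually exclusive modes.
print_fluentd_logs() {
  local path="${LOGGING_FILE_PATH:-/var/log/fluentd/fluentd.log}"
  if [ "$path" != "console" ] && [ -f "$path" ]; then
    cat "$path"
  fi
}
```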

Comment 5 Rich Megginson 2019-07-19 02:42:40 UTC
So is this a bug?

Comment 6 Qiaoling Tang 2019-07-19 05:35:15 UTC
I thought there might be some initial logs after the fluentd pod was recreated, but it seems there are none.

After I deleted the ES pods, I could see fluentd pod logs via `oc logs $fluentd-pod`.

Will keep an eye on this.

Comment 7 Rich Megginson 2019-07-24 16:30:30 UTC
(In reply to Qiaoling Tang from comment #4)
> Then I set LOGGING_FILE_PATH back to default,

How? Did you do this:

oc set env ds/fluentd LOGGING_FILE_PATH-

?

> I couldn't view the latest
> logs in fluentd pod either:
> 
> $ oc get pod
> NAME                                            READY   STATUS      RESTARTS
> AGE
> cluster-logging-operator-d57b7d7ff-d8fgx        1/1     Running     0       
> 19h
> curator-1563503400-cttzm                        0/1     Completed   0       
> 6m30s
> elasticsearch-cdm-szj894kt-1-6d4d858bc7-mxrlw   2/2     Running     0       
> 19h
> elasticsearch-cdm-szj894kt-2-56fdbc45c7-2xk2f   2/2     Running     0       
> 19h
> fluentd-5c9dq                                   1/1     Running     0       
> 18m
> fluentd-8tjjt                                   1/1     Running     0       
> 21m
> fluentd-kxnjd                                   1/1     Running     0       
> 17m
> fluentd-pqzjt                                   1/1     Running     0       
> 19m
> fluentd-q2hhv                                   1/1     Running     0       
> 18m
> fluentd-vrb6g                                   1/1     Running     0       
> 20m
> kibana-5cbd5cc9c9-h9hwd                         2/2     Running     0       
> 19h
> [qitang@wlc-trust-182 aws]$ oc exec fluentd-5c9dq -- logs
> 2019-07-18 07:58:17 +0000 [warn]: dump an error event:
> error_class=Fluent::Plugin::ConcatFilter::TimeoutError error="Timeout flush:
> kubernetes.var.log.containers.kube-controller-manager-ip-10-0-154-94.eu-west-
> 2.compute.internal_openshift-kube-controller-manager_wait-for-host-port-
> 01430031a43f2cb899adf81ebd984e03605ca45ef0d7be42590227f0ddd270ac.log:
<snip>
> [qitang@wlc-trust-182 aws]$ oc logs fluentd-kxnjd
> =============================
> Fluentd logs have been redirected to: /var/log/fluentd/fluentd.log
> If you want to print out the logs, use command:
> oc exec <pod_name> -- logs
> =============================

I'm still not sure what the bug is here - what is it that you are expecting to see?

Comment 8 Qiaoling Tang 2019-07-25 06:17:04 UTC
I was thinking that when the fluentd pod starts there might be some initial logs, so after setting LOGGING_FILE_PATH=console I expected to be able to view some logs with `oc logs $fluentd-pod` right away, once the pod started.


I can confirm this is not a bug. Closing as NOTABUG.

