Bug 1651884 - [DOCS] Could not find fluentd pod logs when LOGGING_FILE_PATH=console.
Summary: [DOCS] Could not find fluentd pod logs when LOGGING_FILE_PATH=console.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Documentation
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.1.0
Assignee: Michael Burke
QA Contact: Qiaoling Tang
Docs Contact: Vikram Goyal
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-11-21 06:17 UTC by Qiaoling Tang
Modified: 2019-03-12 14:24 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-02-19 14:03:31 UTC
Target Upstream Version:
Embargoed:



Description Qiaoling Tang 2018-11-21 06:17:42 UTC
Description of problem:
Couldn't view fluentd pod logs when LOGGING_FILE_PATH=console.
# oc get pod
NAME                                        READY     STATUS    RESTARTS   AGE
cluster-logging-operator-7dd7c56766-mz99w   1/1       Running   0          4h
elasticsearch-clientmaster-0-1-0            1/1       Running   0          2h
elasticsearch-data-1-1-789c6d9944-w86th     1/1       Running   0          2h
elasticsearch-data-1-2-784ccf7869-25z5w     1/1       Running   0          3h
fluentd-77vwt                               1/1       Running   0          1h
fluentd-mb7xw                               1/1       Running   0          1h
fluentd-mtmj6                               1/1       Running   0          2h
fluentd-xtsv5                               1/1       Running   0          1h
kibana-747f7bc8c-wcp66                      2/2       Running   0          4h

# oc exec fluentd-mtmj6 env | grep LOGGING_FILE
LOGGING_FILE_PATH=console
LOGGING_FILE_AGE=10
LOGGING_FILE_SIZE=1024000

# oc logs fluentd-mtmj6
=============================
Fluentd logs have been redirected to: console
If you want to print out the logs, use command:
oc exec <pod_name> -- logs
=============================

Executing `oc exec fluentd-mtmj6 -- logs` prints the contents of the script itself rather than the fluentd logs:

# oc exec fluentd-mtmj6 -- logs
#!/bin/bash

export MERGE_JSON_LOG=${MERGE_JSON_LOG:-true}
CFG_DIR=/etc/fluent/configs.d
ENABLE_PROMETHEUS_ENDPOINT=${ENABLE_PROMETHEUS_ENDPOINT:-"true"}
OCP_OPERATIONS_PROJECTS=${OCP_OPERATIONS_PROJECTS:-"default openshift openshift- kube-"}
OCP_FLUENTD_TAGS=""
for p in ${OCP_OPERATIONS_PROJECTS}; do
    if [[ "${p}" == *- ]] ; then
      p="${p}*"
    fi
    OCP_FLUENTD_TAGS+=" **_${p}_**"
done
ocp_fluentd_files=$( grep -l %OCP_FLUENTD_TAGS% ${CFG_DIR}/* ${CFG_DIR}/*/* 2> /dev/null || : )
for file in ${ocp_fluentd_files} ; do
    sed -i -e "s/%OCP_FLUENTD_TAGS%/${OCP_FLUENTD_TAGS}/" $file
done

echo "============================="
echo "Fluentd logs have been redirected to: $LOGGING_FILE_PATH"
echo "If you want to print out the logs, use command:"
echo "oc exec <pod_name> -- logs"
echo "============================="

......

def create_default_file()
  file_name = "/etc/fluent/configs.d/dynamic/output-remote-syslog.conf"
  c = '## This file was generated by generate-syslog-config.rb'

  # NOTE WELL: you cannot add an @id to this plugin easily like
  # @id remote-syslog-input
  # this gives an error with fluentd 1.x because the generated file is
  # included from two separate places (if ops logging is enabled), and
  # therefore there are two completely different plugin configs each
  # with the same id
  # We'll have to generate two completely separate files each with a
  # unique id e.g. remote-syslog-input-apps or -infra
  @env_vars.each do |r|
  c <<
"
<store>
@type syslog_buffered
"
     r.each { |v|  c << "#{v[1]} #{v[2]}\n" unless !v[2] }
  c <<
"</store>
"
  end

  File.open(file_name, 'w') { |f| f.write(c) }
end

init_environment_vars()
create_default_file()
2018-11-20 22:34:44 -0500 [warn]: [elasticsearch-apps] Could not push logs to Elasticsearch, resetting connection and trying again. Connection reset by peer (Errno::ECONNRESET)


Version-Release number of selected component (if applicable):
ose-logging-fluentd-v4.0.0-0.63.0.0

How reproducible:
Always

Steps to Reproduce:
1. Deploy the EFK stack following https://github.com/openshift/cluster-logging-operator#full-deploy
2. Execute `oc set env ds/fluentd LOGGING_FILE_PATH=console`
3. Wait until all the fluentd pods are running, then check the fluentd pod logs

Actual results:
No fluentd pod logs are shown when LOGGING_FILE_PATH=console.

Expected results:
Executing `oc logs <fluentd-pod>` should show the fluentd pod logs when LOGGING_FILE_PATH=console.

Additional info:

Comment 1 Rich Megginson 2018-11-22 03:13:38 UTC
By setting LOGGING_FILE_PATH=console, what are you trying to do? Are you trying to make fluentd send its logs to stdout so that you can view them with `oc logs $fluentd_pod`? If so, you cannot do that; it is not supported. The value of LOGGING_FILE_PATH must be the absolute path to a file. If this isn't clear from the docs, please convert this BZ into a docs BZ.
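The rule stated in this comment (LOGGING_FILE_PATH must be an absolute path to a file) can be sketched as a small shell check. This is a hypothetical illustration only; the function name and messages are not from the fluentd image:

```shell
#!/bin/bash
# Hypothetical validation of a LOGGING_FILE_PATH value, per the rule in
# comment 1: only an absolute file path is accepted. Values such as
# "console" or a relative path are rejected. Illustrative only.
is_valid_logging_file_path() {
  case "$1" in
    /*) return 0 ;;  # starts with "/": absolute path, accepted
    *)  return 1 ;;  # anything else (e.g. "console"): rejected
  esac
}

is_valid_logging_file_path /var/log/fluentd/fluentd.log && echo "accepted"
is_valid_logging_file_path console || echo "rejected"
```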

Comment 2 Qiaoling Tang 2018-11-22 03:25:00 UTC
(In reply to Rich Megginson from comment #1)
> By setting 'LOGGING_FILE_PATH=console', what are you trying to do?  Are you
> trying to make fluentd send its logs to stdout, so that you can view them
> with `oc logs $fluentd_pod`?  

Yes, this is what I intended to do.


> If so, you cannot do that - that is not supported - the value of 
> LOGGING_FILE_PATH must be the absolute path to a
> file.  If this isn't clear from the docs, please convert this bz into a docs
> bz.

I did this following https://github.com/openshift/origin-aggregated-logging/tree/master/fluentd#configuration .

Comment 3 Qiaoling Tang 2018-11-22 03:27:24 UTC
(In reply to Qiaoling Tang from comment #2)
> (In reply to Rich Megginson from comment #1)
> > By setting 'LOGGING_FILE_PATH=console', what are you trying to do?  Are you
> > trying to make fluentd send its logs to stdout, so that you can view them
> > with `oc logs $fluentd_pod`?  
> 
> Yes, this is what I intended to do.
> 
> 
> > If so, you cannot do that - that is not supported - the value of 
> > LOGGING_FILE_PATH must be the absolute path to a
> > file.  If this isn't clear from the docs, please convert this bz into a docs
> > bz.
> 
I did this according to
https://github.com/openshift/origin-aggregated-logging/tree/master/fluentd#configuration .

Comment 4 Jeff Cantrill 2018-11-23 22:18:08 UTC
Converting to a documentation bug: when the value is set to 'console', the implication is that logs go to STDOUT and are only viewable with `oc logs`. There is no intention to make the provided script able to interact with STDOUT logs. In summary:

value 'console': you must use `oc logs $fluentd_pod`
value '<FILEPATH>': you must use `oc exec $fluentd_pod -- logs`

We should correct this in the origin README, and we should definitely clarify it in the OKD documentation.
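The two cases in the summary above can be sketched as a small shell helper. This is a hypothetical illustration of the dispatch logic, not the actual `logs` script shipped in the fluentd image; the function name and messages are invented for the example:

```shell
#!/bin/bash
# Hypothetical sketch: print the right log-viewing command depending on
# LOGGING_FILE_PATH, matching the summary in comment 4. Illustrative only.
log_hint() {
  local path="${LOGGING_FILE_PATH:-console}"
  if [ "$path" = "console" ]; then
    # Logs go to STDOUT, so the pod log stream has them.
    echo "Logs go to STDOUT; view them with: oc logs <pod_name>"
  else
    # Logs go to a file inside the container, so use the in-pod helper.
    echo "Logs go to ${path}; view them with: oc exec <pod_name> -- logs"
  fi
}

LOGGING_FILE_PATH=console log_hint
LOGGING_FILE_PATH=/var/log/fluentd/fluentd.log log_hint
```

Note that a leading `VAR=value` assignment before a function call, as in the last two lines, sets the variable only for that call.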

Comment 5 Michael Burke 2019-02-19 14:03:31 UTC
Fixed in OpenShift docs via https://github.com/openshift/openshift-docs/pull/12586/

