Bug 1711596 - Project logs do not appear in the projects.* indexes.
Summary: Project logs do not appear in the projects.* indexes.
Keywords:
Status: CLOSED DUPLICATE of bug 1722380
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.11.0
Hardware: x86_64
OS: Windows
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 3.11.z
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-05-18 21:40 UTC by arylwen
Modified: 2019-06-26 15:35 UTC
CC: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-26 15:35:35 UTC
Target Upstream Version:
Embargoed:


Attachments
logging-fluentd (9.03 MB, text/plain), 2019-05-18 23:38 UTC, arylwen
logging-fluentd with startup debug (8.89 MB, text/plain), 2019-05-19 22:00 UTC, arylwen
logging-fluentd with extended debug (9.26 MB, text/plain), 2019-05-20 00:50 UTC, arylwen

Description arylwen 2019-05-18 21:40:28 UTC
Description of problem:
Although journald generates the standard metadata, the log entries do not make it into the projects.* indices.

Journald entry:

{ "__CURSOR" : "s=0ddfd5aa598a426f8accd9a046714784;i=1b0a;b=b34b5a1a5ad94eeab67652882ed98344;m=b700192;t=5892e63067a4b;x=d6c9fa5da61e432f", "__REALTIME_TIMESTAMP" : "1558207206423115", "__MONOTONIC_TIMESTAMP" : "191889810", "_BOOT_ID" : "b34b5a1a5ad94eeab67652882ed98344", "PRIORITY" : "6", "_UID" : "0", "_GID" : "0", "_SYSTEMD_SLICE" : "system.slice", "_MACHINE_ID" : "9b69a058a8524952980ebe1b698b707d", "_TRANSPORT" : "journal", "_CAP_EFFECTIVE" : "1fffffffff", "_COMM" : "dockerd-current", "_EXE" : "/usr/bin/dockerd-current", "_SYSTEMD_CGROUP" : "/system.slice/docker.service", "_SYSTEMD_UNIT" : "docker.service", "_HOSTNAME" : "istio", "_PID" : "5098", "_CMDLINE" : "/usr/bin/dockerd-current -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --authorization-plugin rhel-push-plugin --selinux-enabled --log-driver=journald --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --add-registry registry.access.redhat.com --storage-driver overlay2 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 172.30.0.0/16:5000 --insecure-registry 192.168.99.108:2376 --insecure-registry 172.30.0.0/16", "_SELINUX_CONTEXT" : "system_u:system_r:container_runtime_t:s0", "MESSAGE" : "Launching defaultServer (WebSphere Application Server 19.0.0.1/wlp-1.0.24.cl190120190124-2339) on IBM J9 VM, version 8.0.5.27 - pxa6480sr5fp27-20190104_01(SR5 FP27) (en_US)", "CONTAINER_NAME" : "k8s_reviews_reviews-v2-86886574d6-rhrmv_bookinfo_7953fc06-74ce-11e9-becd-08002701e80b_10", "CONTAINER_TAG" : "157aa2baa3e3", "CONTAINER_ID" : "157aa2baa3e3", "CONTAINER_ID_FULL" : "157aa2baa3e36a4a1e9641fa6709a47b86071de2097b4218594e7f174c5f4a24", "_SOURCE_REALTIME_TIMESTAMP" : "1558207206413135" }

Logstash log entry:

2019-05-18 19:38:14 +0000 [error]: record cannot use elasticsearch index name type project_full: record is missing kubernetes field: {"CONTAINER_TAG"=>"c0b9a2b31555", "systemd"=>{"t"=>{"BOOT_ID"=>"b34b5a1a5ad94eeab67652882ed98344", "CAP_EFFECTIVE"=>"1fffffffff", "CMDLINE"=>"/usr/bin/dockerd-current -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --authorization-plugin rhel-push-plugin --selinux-enabled --log-driver=journald --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --add-registry registry.access.redhat.com --storage-driver overlay2 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=virtualbox --insecure-registry 172.30.0.0/16:5000 --insecure-registry 192.168.99.108:2376 --insecure-registry 172.30.0.0/16", "COMM"=>"dockerd-current", "EXE"=>"/usr/bin/dockerd-current", "GID"=>"0", "MACHINE_ID"=>"9b69a058a8524952980ebe1b698b707d", "PID"=>"5098", "SELINUX_CONTEXT"=>"system_u:system_r:container_runtime_t:s0", "SYSTEMD_CGROUP"=>"/system.slice/docker.service", "SYSTEMD_SLICE"=>"system.slice", "SYSTEMD_UNIT"=>"docker.service", "TRANSPORT"=>"journal", "UID"=>"0"}}, "level"=>"info", "message"=>"[2019-05-18 19:38:12.831][15][warning][misc] [external/envoy/source/common/protobuf/utility.cc:129] Using deprecated option 'envoy.api.v2.listener.Filter.config'. This configuration will be removed from Envoy soon. Please see https://github.com/envoyproxy/envoy/blob/master/DEPRECATED.md for details.", "hostname"=>"istio", "pipeline_metadata"=>{"collector"=>{"ipaddr4"=>"172.17.0.30", "ipaddr6"=>"fe80::42:acff:fe11:1e", "inputname"=>"fluent-plugin-systemd", "name"=>"fluentd", "received_at"=>"2019-05-18T19:38:14.269759+00:00", "version"=>"0.12.43 1.6.0"}}, "@timestamp"=>"2019-05-18T19:38:12.853752+00:00"}



Version-Release number of selected component (if applicable):
cdk 3.11.98
openshift-logging installed with ansible
openshift-ansible-openshift-ansible-3.11.115-1.zip

How reproducible:
Install openshift-logging through ansible on a minishift instance running 3.11.98 and cdk 3.8.0

The log driver cannot be changed to json-file, so journald is the only option.

Steps to Reproduce:
1. Install openshift-logging through ansible on a minishift instance running 3.11.98 and cdk 3.8.0.
2. Deploy an application.
3. Log into Kibana.
4. Navigate to the projects.* index.

Actual results:
No records are present.


Expected results:
Records from project containers can be retrieved.


Additional info:
Could this be because of the logstash configuration?

Comment 1 Rich Megginson 2019-05-18 22:48:05 UTC
> Could this be because  of logstash configuration?

You mean fluentd, not logstash.

I don't know what the problem is.

Please provide your:

/etc/sysconfig/docker

/etc/docker/daemon.json

oc -n openshift-logging get configmap logging-fluentd -o yaml

oc -n openshift-logging get daemonset logging-fluentd -o yaml

Comment 2 arylwen 2019-05-18 23:08:48 UTC
I did mean fluentd, *duh*. 

/etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS="--selinux-enabled --log-driver=journald -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server.pem --tlskey=/etc/docker/server-key.pem --tlsverify"
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi

# Do not add registries in this file anymore. Use /etc/containers/registries.conf
# instead. For more information reference the registries.conf(5) man page.

# Location used for temporary files, such as those created by
# docker load and build operations. Default is /var/lib/docker/tmp
# Can be overriden by setting the following environment variable.
# DOCKER_TMPDIR=/var/tmp

# Controls the /etc/cron.daily/docker-logrotate cron job status.
# To disable, uncomment the line below.
# LOGROTATE=false

# docker-latest daemon can be used by starting the docker-latest unitfile.
# To use docker-latest client, uncomment below lines
#DOCKERBINARY=/usr/bin/docker-latest
#DOCKERDBINARY=/usr/bin/dockerd-latest
#DOCKER_CONTAINERD_BINARY=/usr/bin/docker-containerd-latest
#DOCKER_CONTAINERD_SHIM_BINARY=/usr/bin/docker-containerd-shim-latest

/etc/docker/daemon.json
{}

oc -n openshift-logging get configmap logging-fluentd -o yaml
apiVersion: v1
data:
  fluent.conf: |
    # This file is the fluentd configuration entrypoint. Edit with care.

    @include configs.d/openshift/system.conf

    # In each section below, pre- and post- includes don't include anything initially;
    # they exist to enable future additions to openshift conf as needed.

    ## sources
    ## ordered so that syslog always runs last...
    @include configs.d/openshift/input-pre-*.conf
    @include configs.d/dynamic/input-docker-*.conf
    @include configs.d/dynamic/input-syslog-*.conf
    @include configs.d/openshift/input-post-*.conf
    ##

    <label @INGRESS>
    ## filters
      @include configs.d/openshift/filter-pre-*.conf
      @include configs.d/openshift/filter-retag-journal.conf
      @include configs.d/openshift/filter-k8s-meta.conf
      @include configs.d/openshift/filter-kibana-transform.conf
      @include configs.d/openshift/filter-k8s-flatten-hash.conf
      @include configs.d/openshift/filter-k8s-record-transform.conf
      @include configs.d/openshift/filter-syslog-record-transform.conf
      @include configs.d/openshift/filter-viaq-data-model.conf
      @include configs.d/openshift/filter-post-*.conf
    ##
    </label>

    <label @OUTPUT>
    ## matches
      @include configs.d/openshift/output-pre-*.conf
      @include configs.d/openshift/output-operations.conf
      @include configs.d/openshift/output-applications.conf
      # no post - applications.conf matches everything left
    ##
    </label>
  secure-forward.conf: |
    # <store>
    # @type secure_forward

    # self_hostname ${hostname}
    # shared_key <SECRET_STRING>

    # secure yes
    # enable_strict_verification yes

    # ca_cert_path /etc/fluent/keys/your_ca_cert
    # ca_private_key_path /etc/fluent/keys/your_private_key
      # for private CA secret key
    # ca_private_key_passphrase passphrase

    # <server>
      # or IP
    #   host server.fqdn.example.com
    #   port 24284
    # </server>
    # <server>
      # ip address to connect
    #   host 203.0.113.8
      # specify hostlabel for FQDN verification if ipaddress is used for host
    #   hostlabel server.fqdn.example.com
    # </server>
    # </store>
  throttle-config.yaml: |
    # Logging example fluentd throttling config file

    #example-project:
    #  read_lines_limit: 10
    #
    #.operations:
    #  read_lines_limit: 100
kind: ConfigMap
metadata:
  creationTimestamp: 2019-05-18T01:12:34Z
  name: logging-fluentd
  namespace: openshift-logging
  resourceVersion: "1313711"
  selfLink: /api/v1/namespaces/openshift-logging/configmaps/logging-fluentd
  uid: 04516412-790a-11e9-a901-08002701e80b

oc -n openshift-logging get daemonset logging-fluentd -o yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  annotations:
    prometheus.io/port: "24231"
    prometheus.io/scheme: https
    prometheus.io/scrape: "true"
  creationTimestamp: 2019-05-18T20:05:15Z
  finalizers:
  - foregroundDeletion
  generation: 1
  labels:
    component: fluentd
    logging-infra: fluentd
    provider: openshift
  name: logging-fluentd
  namespace: openshift-logging
  resourceVersion: "1546942"
  selfLink: /apis/extensions/v1beta1/namespaces/openshift-logging/daemonsets/logging-fluentd
  uid: 4061a986-79a8-11e9-b543-08002701e80b
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      component: fluentd
      provider: openshift
  template:
    metadata:
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ""
      creationTimestamp: null
      labels:
        component: fluentd
        logging-infra: fluentd
        provider: openshift
      name: fluentd-elasticsearch
    spec:
      containers:
      - env:
        - name: MERGE_JSON_LOG
          value: "true"
        - name: K8S_HOST_URL
          value: https://kubernetes.default.svc.cluster.local
        - name: ES_HOST
          value: logging-es
        - name: ES_PORT
          value: "9200"
        - name: ES_CLIENT_CERT
          value: /etc/fluent/keys/cert
        - name: ES_CLIENT_KEY
          value: /etc/fluent/keys/key
        - name: ES_CA
          value: /etc/fluent/keys/ca
        - name: OPS_HOST
          value: logging-es
        - name: OPS_PORT
          value: "9200"
        - name: OPS_CLIENT_CERT
          value: /etc/fluent/keys/ops-cert
        - name: OPS_CLIENT_KEY
          value: /etc/fluent/keys/ops-key
        - name: OPS_CA
          value: /etc/fluent/keys/ops-ca
        - name: JOURNAL_SOURCE
        - name: JOURNAL_READ_FROM_HEAD
        - name: BUFFER_QUEUE_LIMIT
          value: "32"
        - name: BUFFER_SIZE_LIMIT
          value: 8m
        - name: FLUENTD_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: fluentd-elasticsearch
              divisor: "0"
              resource: limits.cpu
        - name: FLUENTD_MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: fluentd-elasticsearch
              divisor: "0"
              resource: limits.memory
        - name: FILE_BUFFER_LIMIT
          value: 256Mi
        - name: USE_JOURNAL
          value: "true"
        image: docker.io/openshift/origin-logging-fluentd:v3.11
        imagePullPolicy: IfNotPresent
        name: fluentd-elasticsearch
        resources:
          limits:
            memory: 756Mi
          requests:
            cpu: 100m
            memory: 756Mi
        securityContext:
          privileged: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /run/log/journal
          name: runlogjournal
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker
          name: varlibdockercontainers
          readOnly: true
        - mountPath: /etc/fluent/configs.d/user
          name: config
          readOnly: true
        - mountPath: /etc/fluent/keys
          name: certs
          readOnly: true
        - mountPath: /etc/docker-hostname
          name: dockerhostname
          readOnly: true
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
        - mountPath: /etc/sysconfig/docker
          name: dockercfg
          readOnly: true
        - mountPath: /etc/origin/node
          name: originnodecfg
          readOnly: true
        - mountPath: /etc/docker
          name: dockerdaemoncfg
          readOnly: true
        - mountPath: /var/lib/fluentd
          name: filebufferstorage
      dnsPolicy: ClusterFirst
      nodeSelector:
        logging-infra-fluentd: "true"
      priorityClassName: cluster-logging
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: aggregated-logging-fluentd
      serviceAccountName: aggregated-logging-fluentd
      terminationGracePeriodSeconds: 30
      tolerations:
      - operator: Exists
      volumes:
      - hostPath:
          path: /run/log/journal
          type: ""
        name: runlogjournal
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/docker
          type: ""
        name: varlibdockercontainers
      - configMap:
          defaultMode: 420
          name: logging-fluentd
        name: config
      - name: certs
        secret:
          defaultMode: 420
          secretName: logging-fluentd
      - hostPath:
          path: /etc/hostname
          type: ""
        name: dockerhostname
      - hostPath:
          path: /etc/localtime
          type: ""
        name: localtime
      - hostPath:
          path: /etc/sysconfig/docker
          type: ""
        name: dockercfg
      - hostPath:
          path: /etc/origin/node
          type: ""
        name: originnodecfg
      - hostPath:
          path: /etc/docker
          type: ""
        name: dockerdaemoncfg
      - hostPath:
          path: /var/lib/fluentd
          type: ""
        name: filebufferstorage
  templateGeneration: 3
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
status:
  currentNumberScheduled: 1
  desiredNumberScheduled: 1
  numberAvailable: 1
  numberMisscheduled: 0
  numberReady: 1
  observedGeneration: 1
  updatedNumberScheduled: 1

Comment 3 Rich Megginson 2019-05-18 23:16:08 UTC
That looks good.  Please provide the full fluentd log. I'd like to see if there are any errors at fluentd start up, or otherwise before the "record cannot use elasticsearch index name type project_full".

One possibility is that the fluent-plugin-kubernetes_metadata_filter plugin is not working for some reason (e.g. can't connect to kubernetes).

Comment 4 arylwen 2019-05-18 23:38:55 UTC
Created attachment 1570707 [details]
logging-fluentd

fluentd logs

Comment 5 arylwen 2019-05-18 23:46:01 UTC
I executed:
oc exec logging-fluentd-s4cvp -- logs  > logging-fluentd-s4cvp.txt

The log is attached.

How do I check the plugin?

I can find all the log entries in the orphaned.* index, without the pod/container names.

Comment 6 Rich Megginson 2019-05-19 20:38:55 UTC
(In reply to arylwen from comment #5)
> I executed:
> oc exec logging-fluentd-s4cvp -- logs  > logging-fluentd-s4cvp.txt
> 
> The log is attached.

Thanks.

> 
> How do I check the plugin?

I was hoping the fluentd log would have something other than "record cannot use elasticsearch index name type project_full" to indicate some error somewhere else, but apparently not.

> 
> I can find all the log entries in the orphaned.* index, without the
> pod/container names.

Because it cannot find the kubernetes metadata, it puts the records in the orphaned index.

Try this:

oc edit configmap logging-fluentd

comment out this line

    @include configs.d/openshift/system.conf

Just below it add this

    <system>
      @log_level trace
    </system>

Then restart fluentd (i.e. oc delete the pod)

Let's see if we get any clues from the fluentd logs

Comment 7 arylwen 2019-05-19 22:00:28 UTC
Created attachment 1571042 [details]
logging-fluentd with startup debug

I also did
oc adm policy add-scc-to-user privileged system:serviceaccount:logging:aggregated-logging-fluentd

However, it did not change the outcome; the container logs are still landing in the orphaned index.

I also get the message "No matching indices found: No indices match pattern project.*" when I try to refresh the index pattern.

Comment 8 Rich Megginson 2019-05-19 23:18:43 UTC
I think the auto-detection for the docker log driver is not working.  When fluentd starts up, it looks for /etc/docker/daemon.json and /etc/sysconfig/docker from the node where it is running (it mounts these into the pod/container) to see what the log driver is.  Try this:

oc set env ds/logging-fluentd DEBUG=true VERBOSE=true

and this should tell us what the auto-detection logic is doing.
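The detection being described can be sketched as a standalone shell function. This is a simplified illustration, not the exact run.sh code: the real script's bracket expressions contain literal tab characters (simplified to a space and '=' here), and the file paths are parameterized so the sketch can run outside a node.

```shell
# Simplified sketch of the log-driver auto-detection in fluentd/run.sh.
# The real script reads the node's /etc/docker/daemon.json and
# /etc/sysconfig/docker; here the paths are arguments for illustration.
docker_uses_journal() {
    daemon_json=${1:-/etc/docker/daemon.json}
    sysconfig=${2:-/etc/sysconfig/docker}
    if grep -q '^[^#].*"log-driver":.*journald' "$daemon_json" 2> /dev/null ; then
        return 0
    # Note: the sysconfig pattern only matches a single-quoted OPTIONS value,
    # which is the behavior examined in the following comments.
    elif grep -q "^OPTIONS='[^']*--log-driver[ =][ ]*journald" "$sysconfig" 2> /dev/null ; then
        return 0
    fi
    return 1
}

# Demo against a temporary sysconfig file instead of the real node paths:
tmp=$(mktemp -d)
printf "OPTIONS='--selinux-enabled --log-driver=journald'\n" > "$tmp/sysconfig"
docker_uses_journal "$tmp/missing.json" "$tmp/sysconfig" && echo "journald detected"
```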

Comment 9 arylwen 2019-05-20 00:50:54 UTC
Created attachment 1571052 [details]
logging-fluentd with extended debug

Logging with extended debug attached.

Comment 10 Noriko Hosoi 2019-05-20 01:56:32 UTC
(In reply to Rich Megginson from comment #8)
> I think the auto-detection for the docker log driver is not working.  When
> fluentd starts up, it looks for /etc/docker/daemon.json and
> /etc/sysconfig/docker from the node where it is running (it mounts these
> into the pod/container) to see what is the log driver.  

You are right, @Rich!  This grep [1] works if the value of OPTIONS is single-quoted, but does not work if it is double-quoted as shown in #c2...

[1] - https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/run.sh#L75

@arylwen, could you please replace your OPTIONS line in /etc/sysconfig/docker with this one and retry to start fluentd?
OPTIONS='--selinux-enabled --log-driver=journald -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlscacert=/etc/docker/ca.pem --tlscert=/etc/docker/server.pem --tlskey=/etc/docker/server-key.pem --tlsverify'

@Rich, should we change run.sh to support double-quotes?

diff --git a/fluentd/run.sh b/fluentd/run.sh
index 9b71942d..a20d4ca0 100644
--- a/fluentd/run.sh
+++ b/fluentd/run.sh
@@ -72,7 +72,7 @@ docker_uses_journal() {
         if grep -q '^[^#].*"log-driver":.*journald' /etc/docker/daemon.json 2> /dev/null ; then
             return 0
         fi
-    elif grep -q "^OPTIONS='[^']*--log-driver[   =][     ]*journald" /etc/sysconfig/docker 2> /dev/null ; then
+    elif grep -q "^OPTIONS=['\"][^']*--log-driver[   =][     ]*journald" /etc/sysconfig/docker 2> /dev/null ; then
         return 0
     fi
     return 1
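Both patterns can be exercised standalone. A quick check (with the bracket expressions simplified to a space and '=', where the original script uses literal tabs) shows the current pattern rejecting the double-quoted form and the proposed one accepting it:

```shell
# Current and proposed OPTIONS patterns from run.sh, with the whitespace
# character classes simplified (the originals contain literal tabs).
old="^OPTIONS='[^']*--log-driver[ =][ ]*journald"
new="^OPTIONS=['\"][^']*--log-driver[ =][ ]*journald"

dq='OPTIONS="--selinux-enabled --log-driver=journald"'

printf '%s\n' "$dq" | grep -q "$old" || echo "old pattern: no match"
printf '%s\n' "$dq" | grep -q "$new" && echo "new pattern: match"
```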

Comment 11 Rich Megginson 2019-05-20 03:47:00 UTC
(In reply to Noriko Hosoi from comment #10)
> (In reply to Rich Megginson from comment #8)
> > I think the auto-detection for the docker log driver is not working.  When
> > fluentd starts up, it looks for /etc/docker/daemon.json and
> > /etc/sysconfig/docker from the node where it is running (it mounts these
> > into the pod/container) to see what is the log driver.  
> 
> You are right, @Rich!  This grep [1] works if the value of OPTIONS is
> single-quoted; but does not work if double-quoted as shown in #c2...
> 
> [1] -
> https://github.com/openshift/origin-aggregated-logging/blob/master/fluentd/
> run.sh#L75
> 
> @arylwen, could you please replace your OPTIONS line in
> /etc/sysconfig/docker with this one and retry to start fluentd?
> OPTIONS='--selinux-enabled --log-driver=journald -H tcp://0.0.0.0:2376 -H
> unix:///var/run/docker.sock --tlscacert=/etc/docker/ca.pem
> --tlscert=/etc/docker/server.pem --tlskey=/etc/docker/server-key.pem
> --tlsverify'
> 
> @Rich, should we change run.sh to support double-quotes?
> 
> diff --git a/fluentd/run.sh b/fluentd/run.sh
> index 9b71942d..a20d4ca0 100644
> --- a/fluentd/run.sh
> +++ b/fluentd/run.sh
> @@ -72,7 +72,7 @@ docker_uses_journal() {
>          if grep -q '^[^#].*"log-driver":.*journald' /etc/docker/daemon.json
> 2> /dev/null ; then
>              return 0
>          fi
> -    elif grep -q "^OPTIONS='[^']*--log-driver[   =][     ]*journald"
> /etc/sysconfig/docker 2> /dev/null ; then
> +    elif grep -q "^OPTIONS=['\"][^']*--log-driver[   =][     ]*journald"
> /etc/sysconfig/docker 2> /dev/null ; then
>          return 0
>      fi
>      return 1

No, what we should do is change fluent-plugin-kubernetes_metadata_filter so that it doesn't matter if use_journal is set or not.  https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter/pull/154

Comment 12 Noriko Hosoi 2019-05-20 04:16:22 UTC
(In reply to Rich Megginson from comment #11)
> No, what we should do is change fluent-plugin-kubernetes_metadata_filter so
> that it doesn't matter if use_journal is set or not. 
> https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter/pull/
> 154

I see.  Thanks.

Comment 13 arylwen 2019-05-20 12:43:58 UTC
Works like a charm! Logs are displaying the pod and project name. Beautiful! Thank you!

Here are some notes:

1. I noticed that openshift-logging and istio-system logs are directed to projects.*. Shouldn't they be in .operations.*?
> 2. The config is somewhat painful for minishift; I have to make that change
> every time I start minishift, since the /etc configuration is not persisted.
> 3. Unfortunately, changing minishift to use json-file does not work either.
4. Looking forward to the version 4.0 CDK to support istio and openshift-logging :)

Thank you so much for your help!

Comment 14 Rich Megginson 2019-05-20 14:57:16 UTC
(In reply to arylwen from comment #13)
> Works like a charm! Logs are displaying the pod and project name. Beautiful!
> Thank you!
> 
> Here are some notes:
> 
> 1. I noticed that openshift-logging and istio-system logs are directed to
> projects.*. Shouldn't they be in .operations.*?

openshift-logging should be in .operations.*

istio-system should not because the name doesn't start with "openshift" or "kube".  If you think it should be in .operations.* please file another bz.
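The routing rule being described can be illustrated with a small sketch. This is an illustration of the naming convention only, not the actual fluentd code; it models just the "openshift"/"kube" prefix rule mentioned above.

```shell
# Illustrative sketch of the index routing rule: namespaces whose names
# start with "openshift" or "kube" are operations logs; everything else
# goes to a project.* index.
index_for_namespace() {
    case "$1" in
        openshift*|kube*) echo ".operations" ;;
        *) echo "project.$1" ;;
    esac
}

index_for_namespace openshift-logging   # .operations
index_for_namespace istio-system        # project.istio-system
```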

> 2. The config is somewhat painful for minishift,  I have to make that change
> every time I start minishift, since the /etc configuration is not persisted.

This will be the subject of the bug fix.

> 3. Unfortunately changing minishift to use json-file does not work either
> 4. Looking forward to the version 4.0 CDK to support istio and
> openshift-logging :)

I have no idea if 4.0 CDK will support logging.  But that will be the subject of another bz if not...

> 
> Thank you so much for your help!

Comment 16 Rich Megginson 2019-06-26 15:35:35 UTC

*** This bug has been marked as a duplicate of bug 1722380 ***

