Bug 1768688 - Got many `Reconciler error` after creating logforwarding CR instance.
Summary: Got many `Reconciler error` after creating logforwarding CR instance.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.3.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-11-05 02:04 UTC by Qiaoling Tang
Modified: 2020-01-23 11:11 UTC (History)
2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-01-23 11:10:52 UTC
Target Upstream Version:
Embargoed:
qitang: needinfo-




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-logging-operator pull 298 0 'None' closed Bug 1768688: Enable status for CRDs 2021-02-18 20:58:21 UTC
Red Hat Product Errata RHBA-2020:0062 0 None None None 2020-01-23 11:11:05 UTC

Description Qiaoling Tang 2019-11-05 02:04:33 UTC

Comment 1 Qiaoling Tang 2019-11-05 02:09:22 UTC
Description of problem:
There are many `Reconciler error` messages in the CLO pod after creating a logforwarding CR instance:

{"level":"error","ts":1572919272.5097518,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"logforwarding-controller","request":"openshift-logging/instance","error":"logforwardings.logging.openshift.io \"instance\" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2019-11-05T02:01:12Z" level=info msg="Updating status of Elasticsearch"
time="2019-11-05T02:01:12Z" level=info msg="Updating status of Kibana for \"instance\""
time="2019-11-05T02:01:12Z" level=info msg="Updating status of Curator"
{"level":"error","ts":1572919272.8506474,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"clusterlogging-controller","request":"openshift-logging/instance","error":"Unable to create or update collection for \"instance\": Unable to generate source configs for supported source types: []","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

$ oc get svc
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
elasticsearch           ClusterIP   172.30.121.67    <none>        9200/TCP    74m
elasticsearch-cluster   ClusterIP   172.30.121.183   <none>        9300/TCP    74m
elasticsearch-metrics   ClusterIP   172.30.94.125    <none>        60000/TCP   74m
fluentd                 ClusterIP   172.30.9.215     <none>        24231/TCP   74m
fluentdserver1          ClusterIP   172.30.86.73     <none>        24224/TCP   32m



$ oc get logforwarding
NAME       AGE
instance   4m46s
$ oc get logforwarding -oyaml
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1alpha1
  kind: LogForwarding
  metadata:
    creationTimestamp: "2019-11-05T01:58:03Z"
    generation: 1
    name: instance
    namespace: openshift-logging
    resourceVersion: "75543"
    selfLink: /apis/logging.openshift.io/v1alpha1/namespaces/openshift-logging/logforwardings/instance
    uid: 3109986c-7721-4df0-9631-f2caea8682c6
  spec:
    outputs:
    - endpoint: elasticsearch.openshift-logging.svc:9200
      name: clo-default-output-es
      secret:
        name: elasticsearch
      type: elasticsearch
    - endpoint: fluentdserver1.openshift-logging.svc:24224
      name: fluentd-created-by-user
      type: forward
    pipelines:
    - name: clo-default-app-pipeline
      outputRefs:
      - clo-default-output-es
      type: logs.app
    - name: clo-default-infra-pipeline
      outputRefs:
      - clo-managaged-output-es
      - fluentd-created-by-user
      type: logs.infra
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

$ oc get clusterlogging instance -oyaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  creationTimestamp: "2019-11-05T00:52:51Z"
  generation: 2617
  name: instance
  namespace: openshift-logging
  resourceVersion: "77827"
  selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
  uid: bf6e67db-7318-437d-b50b-f22cf33c9228
spec:
  collection:
    logs:
      fluentd:
        resources: null
      type: fluentd
  curation:
    curator:
      resources: null
      schedule: '*/10 * * * *'
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 3
      redundancyPolicy: SingleRedundancy
      resources:
        requests:
          memory: 4Gi
      storage:
        size: 20Gi
        storageClassName: gp2
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
status:
  collection:
    logs:
      fluentdStatus:
        daemonSet: fluentd
        nodes:
          fluentd-688bd: ip-10-0-146-57.ap-south-1.compute.internal
          fluentd-b9xx2: ip-10-0-174-244.ap-south-1.compute.internal
          fluentd-bss79: ip-10-0-138-254.ap-south-1.compute.internal
          fluentd-knxcg: ip-10-0-138-48.ap-south-1.compute.internal
          fluentd-qhzxd: ip-10-0-152-20.ap-south-1.compute.internal
          fluentd-trp7g: ip-10-0-173-155.ap-south-1.compute.internal
        pods:
          failed: []
          notReady: []
          ready:
          - fluentd-688bd
          - fluentd-b9xx2
          - fluentd-bss79
          - fluentd-knxcg
          - fluentd-qhzxd
          - fluentd-trp7g
  curation:
    curatorStatus:
    - clusterCondition:
        curator-1572919200-vdw9z:
        - lastTransitionTime: "2019-11-05T02:03:56Z"
          reason: Completed
          status: "True"
          type: ContainerTerminated
      cronJobs: curator
      schedules: '*/10 * * * *'
      suspended: false
  logStore:
    elasticsearchStatus:
    - ShardAllocationEnabled: all
      cluster:
        activePrimaryShards: 12
        activeShards: 24
        initializingShards: 0
        numDataNodes: 3
        numNodes: 3
        pendingTasks: 0
        relocatingShards: 0
        status: green
        unassignedShards: 0
      clusterName: elasticsearch
      nodeConditions:
        elasticsearch-cdm-5qmds27d-1: []
        elasticsearch-cdm-5qmds27d-2: []
        elasticsearch-cdm-5qmds27d-3: []
      nodeCount: 3
      pods:
        client:
          failed: []
          notReady: []
          ready:
          - elasticsearch-cdm-5qmds27d-1-54f7559655-5d6sd
          - elasticsearch-cdm-5qmds27d-2-5d84d9dbb8-pwz6w
          - elasticsearch-cdm-5qmds27d-3-565c6cb945-csn2c
        data:
          failed: []
          notReady: []
          ready:
          - elasticsearch-cdm-5qmds27d-1-54f7559655-5d6sd
          - elasticsearch-cdm-5qmds27d-2-5d84d9dbb8-pwz6w
          - elasticsearch-cdm-5qmds27d-3-565c6cb945-csn2c
        master:
          failed: []
          notReady: []
          ready:
          - elasticsearch-cdm-5qmds27d-1-54f7559655-5d6sd
          - elasticsearch-cdm-5qmds27d-2-5d84d9dbb8-pwz6w
          - elasticsearch-cdm-5qmds27d-3-565c6cb945-csn2c
  visualization:
    kibanaStatus:
    - deployment: kibana
      pods:
        failed: []
        notReady: []
        ready:
        - kibana-578859988-nkj8r
      replicaSets:
      - kibana-578859988
      replicas: 1

Version-Release number of selected component (if applicable):
quay.io/openshift/origin-cluster-logging-operator@sha256:0f540f4d17b7c19665beb823386ccd6f32a92c99a901a3b7724bdd7834110f25


How reproducible:
Always


Comment 2 Qiaoling Tang 2019-11-05 02:16:53 UTC
After adding the annotation to the clusterlogging instance, I still get many error messages:
{"level":"error","ts":1572920036.034428,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"clusterlogging-controller","request":"openshift-logging/instance","error":"Unable to create or update collection for \"instance\": Unable to generate source configs for supported source types: []","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
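(Aside: the repeated lines above are structured JSON, so when triaging it can help to pull out just the relevant fields instead of scanning full stack traces. A minimal sketch in Python, using a trimmed copy of the log line above; nothing here is part of the operator itself:)

```python
import json

# Trimmed copy of one structured CLO log line from above (stacktrace omitted).
line = '{"level":"error","ts":1572920036.034428,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"clusterlogging-controller","request":"openshift-logging/instance","error":"Unable to create or update collection for \\"instance\\": Unable to generate source configs for supported source types: []"}'

# Parse the JSON record and print only the controller and error fields.
entry = json.loads(line)
print(entry["controller"], "->", entry["error"])
```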

Comment 3 Jeff Cantrill 2019-11-05 14:03:16 UTC
Please list the steps taken in order to observe this error.

Comment 4 Qiaoling Tang 2019-11-06 00:17:05 UTC
Sorry, I forgot to add the reproduction steps.
Here is what I did:

1. deploy logging operators via OLM
2. create a clusterlogging instance; the file is in comment 1
3. deploy a fluentd server using https://github.com/openshift-qe/v3-testfiles/blob/master/logging/fluentdserver/forward/deploy.sh, but I didn't change the configmap/fluentd
4. create a logforwarding instance; the file is in comment 1
5. check the CLO pod log; there were two error messages, which I put in comment 1

Then I noticed I had forgotten to add the annotation, so I added it to these two CR instances; after that, I only saw the error message that I put in comment 2.


metadata:
  annotations:
    clusterlogging.openshift.io/promtaildevpreview: enabled

Comment 5 Qiaoling Tang 2019-11-06 00:26:16 UTC
(In reply to Qiaoling Tang from comment #4)
> metadata:
>   annotations:
>     clusterlogging.openshift.io/promtaildevpreview: enabled

The annotation was actually:
 clusterlogging.openshift.io/logforwardingtechpreview: enabled

Comment 6 Jeff Cantrill 2019-11-07 14:47:55 UTC
I'm still investigating this issue, but you may need to modify how you set up the receiving fluentd. The e2e test does: https://github.com/openshift/cluster-logging-operator/blob/master/test/e2e/logforwarding/forward_to_fluent_test.go#L34

Comment 7 Qiaoling Tang 2019-11-08 01:25:54 UTC
Thanks Jeff. I changed `type` to `inputType` in `spec.pipelines` of the logforwarding CR instance; now I can find logs in the fluentdserver, but I still get some error messages in the CLO pod:

time="2019-11-08T01:18:59Z" level=error msg="Error updating &TypeMeta{Kind:,APIVersion:,}: Operation cannot be fulfilled on clusterloggings.logging.openshift.io \"instance\": the object has been modified; please apply your changes to the latest version and try again"
time="2019-11-08T01:18:59Z" level=info msg="Collector container EnvVar change found, updating \"fluentd\""
time="2019-11-08T01:18:59Z" level=info msg="Collector volumes change found, updating \"fluentd\""
time="2019-11-08T01:18:59Z" level=info msg="Updating status of Fluentd"
time="2019-11-08T01:18:59Z" level=error msg="Error updating &TypeMeta{Kind:,APIVersion:,}: Operation cannot be fulfilled on clusterloggings.logging.openshift.io \"instance\": the object has been modified; please apply your changes to the latest version and try again"
{"level":"error","ts":1573175939.7097843,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"logforwarding-controller","request":"openshift-logging/instance","error":"logforwardings.logging.openshift.io \"instance\" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2019-11-08T01:18:59Z" level=info msg="Updating status of Elasticsearch"
time="2019-11-08T01:18:59Z" level=error msg="Error updating &TypeMeta{Kind:ClusterLogging,APIVersion:logging.openshift.io/v1,}: Operation cannot be fulfilled on clusterloggings.logging.openshift.io \"instance\": the object has been modified; please apply your changes to the latest version and try again"
time="2019-11-08T01:19:00Z" level=info msg="Updating status of Kibana for \"instance\""
time="2019-11-08T01:19:00Z" level=error msg="Error updating &TypeMeta{Kind:ClusterLogging,APIVersion:logging.openshift.io/v1,}: Operation cannot be fulfilled on clusterloggings.logging.openshift.io \"instance\": the object has been modified; please apply your changes to the latest version and try again"
time="2019-11-08T01:19:00Z" level=info msg="Updating status of Curator"
time="2019-11-08T01:19:00Z" level=error msg="Error updating &TypeMeta{Kind:ClusterLogging,APIVersion:logging.openshift.io/v1,}: Operation cannot be fulfilled on clusterloggings.logging.openshift.io \"instance\": the object has been modified; please apply your changes to the latest version and try again"
time="2019-11-08T01:19:00Z" level=info msg="Collector container EnvVar change found, updating \"fluentd\""
time="2019-11-08T01:19:00Z" level=info msg="Collector volumes change found, updating \"fluentd\""
time="2019-11-08T01:19:00Z" level=info msg="Updating status of Fluentd"
time="2019-11-08T01:19:00Z" level=error msg="Error updating &TypeMeta{Kind:ClusterLogging,APIVersion:logging.openshift.io/v1,}: Operation cannot be fulfilled on clusterloggings.logging.openshift.io \"instance\": the object has been modified; please apply your changes to the latest version and try again"
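(For reference, the `spec.pipelines` change described above would look roughly like this. This is a sketch assuming the v1alpha1 LogForwarding field names, not the full CR:)

```yaml
spec:
  pipelines:
  - name: clo-default-app-pipeline
    inputType: logs.app          # previously `type: logs.app`
    outputRefs:
    - clo-default-output-es
  - name: clo-default-infra-pipeline
    inputType: logs.infra        # previously `type: logs.infra`
    outputRefs:
    - clo-default-output-es
    - fluentd-created-by-user
```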

Comment 8 Qiaoling Tang 2019-11-14 07:35:24 UTC
An update:

When testing forwarding logs to elasticsearch, once the logforwarding instance is created I keep getting the error messages mentioned in comment 2, and the CLO doesn't update the configuration for fluentd.

Comment 9 Jeff Cantrill 2019-11-14 21:46:26 UTC
I'm still investigating this error message, but your setup is not correct. The deployment.yaml does not mount a configmap to configure the receiver. CLO will manage its own fluentd, not that of the receiver. Something like the following will mount your config into the receiver, but you have to provide the config yourself, e.g. https://github.com/openshift/cluster-logging-operator/blob/master/test/helpers/fluentd.go#L29:

$ oc get deployment fluentdserver1 -o yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "3"
  creationTimestamp: "2019-11-14T21:01:36Z"
  generation: 3
  labels:
    component: fluentd
    logging-infra: fluentdserver1
    provider: aosqe
  name: fluentdserver1
  namespace: openshift-logging
  resourceVersion: "720750"
  selfLink: /apis/extensions/v1beta1/namespaces/openshift-logging/deployments/fluentdserver1
  uid: f1cbf426-0721-11ea-8fab-0e8f32acdd69
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      component: fluentd
      logging-infra: fluentdserver1
      provider: aosqe
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        component: fluentd
        logging-infra: fluentdserver1
        provider: aosqe
    spec:
      containers:
      - image: docker.io/fluent/fluentd:latest
        imagePullPolicy: IfNotPresent
        name: fluentd
        ports:
        - containerPort: 24224
          name: fluentd
          protocol: TCP
        resources: {}
        securityContext:
          privileged: true
          procMount: Default
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /fluent/etc
          name: config
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: fluentdserver
      serviceAccountName: fluentdserver
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          name: fluent-receiver
        name: config
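(For completeness, the `fluent-receiver` ConfigMap mounted above must contain a fluentd configuration for the receiver. A minimal sketch, assuming the file is named `fluent.conf` under the mounted `/fluent/etc` path; adapt the port and output to your setup:)

```
<source>
  @type forward
  port 24224
</source>
<match **>
  @type stdout   # just print received records; replace with a real store
</match>
```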

Comment 10 Qiaoling Tang 2019-11-15 00:32:37 UTC
(In reply to Jeff Cantrill from comment #9)
> I'm still investigating this error message but your setup is not correct. 
> Using the deployment.yaml it has no mounting of the configmap to configure
> the receiver.  CLO will manage it's own fluentd, not that of the receiver. 

Yes. In my observation, when logforwarding is enabled and a logforwarding instance is created, the CLO updates cm/fluentd for the CLO-managed fluentd and restarts the fluentd pods. But when I set `type: elasticsearch` in the outputs with an endpoint pointing to an elasticsearch instance I created myself, and then create the logforwarding instance, the CLO doesn't update the fluentd pods.

Comment 11 Qiaoling Tang 2019-11-15 03:22:46 UTC
(In reply to Qiaoling Tang from comment #10)

To be clearer, here are my steps to reproduce the `Reconciler error`:

Option 1:
1. deploy CLO and EO
2. deploy a receiver, fluentd or elasticsearch
3. create clusterlogging instance with:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  annotations:
    clusterlogging.openshift.io/logforwardingtechpreview: enabled
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "*/10 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
4. check the pods in the openshift-logging namespace; the fluentd pods are not deployed. Check the CLO pod logs; there are many error messages:
{"level":"error","ts":1573783865.8084269,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"clusterlogging-controller","request":"openshift-logging/instance","error":"Unable to create or update collection for \"instance\": Unable to generate source configs for supported source types: []","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
5. create a logforwarding instance to forward logs to the receiver
6. check the CLO pod log and the pods in the openshift-logging namespace; there are two different conditions:

Condition 1:
The receiver is fluentd and the forward type is forward. The CLO-managed fluentd pods are deployed and updated to forward logs to the fluentd receiver, but there are some error messages:
{"level":"error","ts":1573787909.277955,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"logforwarding-controller","request":"openshift-logging/instance","error":"logforwardings.logging.openshift.io \"instance\" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Condition 2:
The receiver is elasticsearch and the forward type is elasticsearch. The CLO doesn't deploy the fluentd pods, and the CLO pod keeps repeating these two error messages:
{"level":"error","ts":1573785710.5921547,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"logforwarding-controller","request":"openshift-logging/instance","error":"logforwardings.logging.openshift.io \"instance\" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2019-11-15T02:41:50Z" level=info msg="Updating status of Kibana for \"instance\""
time="2019-11-15T02:41:50Z" level=info msg="Updating status of Curator"
{"level":"error","ts":1573785710.8780751,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"clusterlogging-controller","request":"openshift-logging/instance","error":"Unable to create or update collection for \"instance\": Unable to generate source configs for supported source types: []","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}


Option 2:
1. deploy CLO and EO via OLM
2. deploy receiver: fluentd or elasticsearch
3. create clusterlogging instance with:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  annotations:
    clusterlogging.openshift.io/logforwardingtechpreview: enabled
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      redundancyPolicy: "ZeroRedundancy"
      resources:
        requests:
          memory: "4Gi"
      storage:
        storageClassName: "gp2"
        size: "20Gi"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "*/10 * * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
4. check the pods in the openshift-logging namespace; the EFK pods are deployed successfully, and there is no error message in the CLO pod
5. create a logforwarding instance to forward logs to the receiver
6. check the CLO pod log

Condition 1:
The receiver is fluentd and the forward type is forward. The CLO updates the CLO-managed fluentd pods to forward logs to the receiver; I can see the CLO-managed fluentd pods were restarted and the docs.count of the indices in the clusterlogging-managed ES stopped increasing, but there are some error messages in the CLO pod:
{"level":"error","ts":1573786444.1105566,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"logforwarding-controller","request":"openshift-logging/instance","error":"logforwardings.logging.openshift.io \"instance\" not found","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Condition 2:
The receiver is elasticsearch and the forward type is elasticsearch. The CLO doesn't update the CLO-managed fluentd pods, and there are some error messages in the CLO pod:
{"level":"error","ts":1573787270.583576,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"clusterlogging-controller","request":"openshift-logging/instance","error":"Unable to create or update collection for \"instance\": Unable to generate source configs for supported source types: []","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

{"level":"error","ts":1573787521.370755,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"clusterlogging-controller","request":"openshift-logging/instance","error":"Unable to create or update collection for \"instance\": Unable to generate source configs for supported source types: []","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/_output/src/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}


Sorry for misleading you.

Comment 13 Qiaoling Tang 2019-11-18 05:51:59 UTC
Verified with ose-cluster-logging-operator-v4.3.0-201911161914

Comment 15 errata-xmlrpc 2020-01-23 11:10:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0062

