Description of problem:

When configuring a log forwarding instance with multiple outputs that use the same secret, the CLO fails to update the ds/fluentd.

Version-Release number of selected component (if applicable):

$ oc get clusterversions.config.openshift.io
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.13    True        False         14d     Cluster version is 4.5.13

$ oc get csv
NAME                                           DISPLAY                  VERSION                 REPLACES                                       PHASE
clusterlogging.4.5.0-202010090328.p0           Cluster Logging          4.5.0-202010090328.p0   clusterlogging.4.5.0-202009240830.p0           Succeeded
elasticsearch-operator.4.5.0-202010081312.p0   Elasticsearch Operator   4.5.0-202010081312.p0   elasticsearch-operator.4.5.0-202009260615.p0   Succeeded

Steps to Reproduce:

1. Create the secret for TLS communication with an external fluentd:

oc create secret generic external-fluentd \
  --from-file=tls.crt=fluentd.crt \
  --from-file=tls.key=fluentd.key \
  --from-file=ca-bundle.crt=ROOT+CA.crt \
  --from-literal=shared_key=blaa

2. Create a log forwarding instance with the resource below (note that both outputs reference the same secret, external-fluentd):

apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
    - name: external-fluentd-1
      type: forward
      endpoint: 'fluentd-01.luji.io:24224'
      secret:
        name: external-fluentd
    - name: external-fluentd-2
      type: forward
      endpoint: 'fluentd-02.luji.io:24224'
      secret:
        name: external-fluentd
  pipelines:
    - name: app-pipeline
      inputType: logs.app
      outputRefs:
        - external-fluentd-1
        - external-fluentd-2
    - name: infra-pipeline
      inputType: logs.infra
      outputRefs:
        - external-fluentd-1
        - external-fluentd-2
    - name: clo-default-audit-pipeline
      inputType: logs.audit
      outputRefs:
        - external-fluentd-1
        - external-fluentd-2

Actual results:

The CLO cannot update ds/fluentd with the secret and starts throwing the errors below:

{"level":"error","ts":1603274734.4781244,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"logforwarding-controller","request":"openshift-logging/instance","error":"Unable to reconcile collection for \"instance\": Failure creating Fluentd Daemonset DaemonSet.apps \"fluentd\" is invalid: spec.template.spec.containers[0].volumeMounts[16].mountPath: Invalid value: \"/var/run/ocp-collector/secrets/external-fluentd\": must be unique","stacktrace":"github.com/openshift/cluster-logging-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/openshift/cluster-logging-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\ngithub.com/openshift/cluster-logging-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

{"level":"error","ts":1603274734.785573,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"trustedcabundle-controller","request":"openshift-logging/fluentd-trusted-ca-bundle","error":"Failure creating Fluentd Daemonset DaemonSet.apps \"fluentd\" is invalid: spec.template.spec.containers[0].volumeMounts[16].mountPath: Invalid value: \"/var/run/ocp-collector/secrets/external-fluentd\": must be unique","stacktrace":"github.com/openshift/cluster-logging-operator/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/github.com/go-logr/zapr/zapr.go:128\ngithub.com/openshift/cluster-logging-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\ngithub.com/openshift/cluster-logging-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\ngithub.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\ngithub.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\ngithub.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/cluster-logging-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Expected results:

The CLO must update ds/fluentd correctly, and the new pods must roll out using the same secret.

Additional info:

The issue hit a customer while configuring log forwarding through the API to send logs to five external fluentd receivers, all using the same credentials/certificate. As a workaround we created five secrets, one per output. This workaround isn't practical: when the certificates need rotation, the same action must be applied to five identical resources.
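The "must be unique" rejection comes from Kubernetes validating that every volumeMount within a container has a distinct mountPath. Because the operator apparently generates one secret mount per output, two outputs referencing the same secret both produce a mount at /var/run/ocp-collector/secrets/external-fluentd, which the API server refuses. A minimal sketch of the kind of deduplication needed when building the mounts — the types and function names here are hypothetical simplifications, not the actual CLO code:

```go
package main

import "fmt"

// Output mirrors the relevant fields of a LogForwarding output:
// each output may reference a secret by name.
type Output struct {
	Name       string
	SecretName string
}

// Mount is a simplified stand-in for corev1.VolumeMount.
type Mount struct {
	Name      string
	MountPath string
}

// secretMounts builds one mount per *distinct* secret rather than one per
// output, so two outputs sharing a secret no longer collide on MountPath.
func secretMounts(outputs []Output) []Mount {
	seen := map[string]bool{}
	var mounts []Mount
	for _, o := range outputs {
		if o.SecretName == "" || seen[o.SecretName] {
			continue // no secret, or this secret is already mounted
		}
		seen[o.SecretName] = true
		mounts = append(mounts, Mount{
			Name:      o.SecretName,
			MountPath: "/var/run/ocp-collector/secrets/" + o.SecretName,
		})
	}
	return mounts
}

func main() {
	outputs := []Output{
		{Name: "external-fluentd-1", SecretName: "external-fluentd"},
		{Name: "external-fluentd-2", SecretName: "external-fluentd"},
	}
	// Prints the shared path exactly once instead of twice.
	for _, m := range secretMounts(outputs) {
		fmt.Println(m.MountPath)
	}
}
```

Keying the volume and mount on the secret name rather than the output name is what lets any number of outputs share one set of credentials.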
//edited Just adding the configuration that worked.

Create the secrets:

oc create secret generic external-fluentd-01 \
  --from-file=tls.crt=fluentd.crt \
  --from-file=fluentd.key=mail.key \
  --from-file=ca-bundle.crt=ROOT+CA.crt \
  --from-literal=shared_key=blaa

oc create secret generic external-fluentd-02 \
  --from-file=tls.crt=fluentd.crt \
  --from-file=fluentd.key=mail.key \
  --from-file=ca-bundle.crt=ROOT+CA.crt \
  --from-literal=shared_key=blaa

Create the logforwarding instance:

apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
    - name: external-fluentd-1
      type: forward
      endpoint: 'fluentd-01.luji.io:24224'
      secret:
        name: external-fluentd-01
    - name: external-fluentd-2
      type: forward
      endpoint: 'fluentd-02.luji.io:24224'
      secret:
        name: external-fluentd-02
  pipelines:
    - name: app-pipeline
      inputType: logs.app
      outputRefs:
        - external-fluentd-1
        - external-fluentd-2
    - name: infra-pipeline
      inputType: logs.infra
      outputRefs:
        - external-fluentd-1
        - external-fluentd-2
    - name: clo-default-audit-pipeline
      inputType: logs.audit
      outputRefs:
        - external-fluentd-1
        - external-fluentd-2

Which results in:

$ oc describe ds fluentd
Name:           fluentd
Selector:       component=fluentd,logging-infra=fluentd,provider=openshift
Node-Selector:  kubernetes.io/os=linux
... output truncated ...
    Mounts:
      /etc/docker from dockerdaemoncfg (ro)
      /etc/fluent/configs.d/secure-forward from secureforwardconfig (ro)
      ... output truncated ...
      /var/log from varlog (rw)
      /var/run/ocp-collector/secrets/external-fluentd-01 from external-fluentd-1 (rw)
      /var/run/ocp-collector/secrets/external-fluentd-02 from external-fluentd-2 (rw)
  Volumes:
    runlogjournal:
      Type:  HostPath (bare host directory volume)
      Path:  /run/log/journal
    ... output truncated ...
    external-fluentd-1:
      Type:        Secret (a volume populated by a Secret)
      SecretName:  external-fluentd-01
      Optional:    false
    external-fluentd-2:
      Type:        Secret (a volume populated by a Secret)
      SecretName:  external-fluentd-02
      Optional:    false
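Until the shared-secret case works, the main cost of this per-output workaround is certificate rotation: the same new key material has to be pushed into every duplicated secret. One way to reduce the error surface is to generate the recreate commands for all copies from a single source of truth. The helper below is purely illustrative (it only prints oc commands for review, it does not run them); the file names match the reproduce step above:

```go
package main

import "fmt"

// rotationCommands returns, for each per-output copy of the shared secret,
// the oc commands that would recreate it with freshly rotated material.
// This is a hypothetical convenience script, not part of the operator.
func rotationCommands(secretNames []string) []string {
	var cmds []string
	for _, name := range secretNames {
		cmds = append(cmds,
			fmt.Sprintf("oc delete secret %s -n openshift-logging --ignore-not-found", name),
			fmt.Sprintf("oc create secret generic %s "+
				"--from-file=tls.crt=fluentd.crt "+
				"--from-file=tls.key=fluentd.key "+
				"--from-file=ca-bundle.crt=ROOT+CA.crt "+
				"--from-literal=shared_key=blaa "+
				"-n openshift-logging", name),
		)
	}
	return cmds
}

func main() {
	// One entry per duplicated secret; extend to all five for the
	// customer scenario described above.
	names := []string{"external-fluentd-01", "external-fluentd-02"}
	for _, c := range rotationCommands(names) {
		fmt.Println(c)
	}
}
```

Piping the output through review (or `| sh` once trusted) keeps all copies rotated from the same certificate files in one step.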
Fixed by PR https://github.com/openshift/cluster-logging-operator/pull/823; awaiting merge.
Verified with cluster-logging.5.0.0-34. Fluentd can forward logs to different receivers with the same secret.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Errata Advisory for Openshift Logging 5.0.0), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:0652