Description of problem:
The kibana pod could not be started; it keeps cycling between ContainerCreating and Terminating.

Version-Release number of selected component (if applicable):
clusterlogging.4.3.14-202004222058

How reproducible:
Always

Steps to Reproduce:
1. Deploy cluster logging via OLM.
2. Create a ClusterLogging CR:

apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: openshift-logging
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    elasticsearch:
      nodeCount: 1
      resources:
        limits:
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      storage: {}
      redundancyPolicy: "ZeroRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd: {}

3. Check the pod status:

oc get pods
NAME                                           READY   STATUS              RESTARTS   AGE
cluster-logging-operator-5db74bfdfb-c5cfd      1/1     Running             0          2m43s
elasticsearch-cdm-2a5nkm9b-1-756cf5f49-8m2r4   2/2     Running             0          2m6s
fluentd-fhnmq                                  1/1     Running             1          118s
fluentd-nf4qr                                  1/1     Running             1          2m2s
fluentd-rjmv5                                  1/1     Running             1          2m3s
fluentd-t69mm                                  1/1     Running             1          114s
fluentd-xjp7b                                  1/1     Running             2          119s
fluentd-z7g7r                                  1/1     Running             1          2m3s
kibana-67f5f6774b-28r5s                        0/2     Terminating         0          38s
kibana-67f5f6774b-4f4tj                        0/2     Terminating         0          34s
kibana-67f5f6774b-4q8cq                        0/2     ContainerCreating   0          0s
kibana-67f5f6774b-5w9nl                        0/2     Terminating         0          19s
kibana-67f5f6774b-9g8zc                        0/2     Terminating         0          42s
kibana-67f5f6774b-f7tks                        2/2     Terminating         0          87s
kibana-67f5f6774b-h4qwh                        2/2     Terminating         0          59s
kibana-67f5f6774b-nf256                        0/2     Terminating         0          15s
kibana-67f5f6774b-sjghh                        0/2     Terminating         1          68s
kibana-67f5f6774b-tbgfw                        1/2     Terminating         0          50s
kibana-67f5f6774b-thn7k                        0/2     Terminating         0          26s
kibana-67f5f6774b-wcm2n                        0/2     Terminating         0          8s
kibana-67f5f6774b-xzdtp                        0/2     Terminating         0          54s
kibana-869d5455b9-8r5tb                        0/2     Terminating         0          53s
kibana-869d5455b9-8v7v8                        1/2     Terminating         0          36s
kibana-869d5455b9-bvqct                        1/2     Terminating         0          40s
kibana-869d5455b9-cx2f5                        0/2     Terminating         0          23s
kibana-869d5455b9-fk25g                        0/2     Terminating         0          57s
kibana-869d5455b9-kvkj2                        0/2     Terminating         0          88s
kibana-869d5455b9-l94p7                        2/2     Terminating         0          32s
kibana-869d5455b9-lbp4f                        0/2     ContainerCreating   0          5s
kibana-869d5455b9-lg2hz                        0/2     Terminating         1          85s
kibana-869d5455b9-n2tmm                        0/2     Terminating         0          13s
kibana-869d5455b9-s9j2v                        0/2     Terminating         1          76s
kibana-869d5455b9-w8c72                        0/2     Terminating         0          49s

Actual results:
The kibana pod never starts; it keeps cycling between ContainerCreating and Terminating.
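The churn above can be quantified from the `oc get pods` output itself; the following is a minimal sketch, assuming the default openshift-logging namespace (the `count_kibana_churn` helper name is hypothetical, not part of any tooling):

```shell
# Count kibana pods that are NOT in Running state, reading
# `oc get pods` output on stdin. Field 1 is NAME, field 3 is STATUS.
# Helper name count_kibana_churn is hypothetical.
count_kibana_churn() {
  awk '$1 ~ /^kibana-/ && $3 != "Running" { n++ } END { print n+0 }'
}

# Live usage against the cluster (namespace is an assumption):
#   oc get pods -n openshift-logging | count_kibana_churn
# A persistently non-zero count while the AGE column stays in seconds
# indicates the ContainerCreating/Terminating loop described above.
```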
Moving to MODIFIED: After investigating the cluster, I see that the kibana-proxy secret keeps being recreated, and looking at the cert storage on the operator pod, I see that we don't have the expected "kibana-session-secret" file that PR [1] provides. This leads me to believe that the cluster-logging image being tested is missing that commit but does contain the commit that restarts Kibana when its secrets change [2].

sh-4.2$ ls -1 /tmp/ocp-clo | grep kibana
kibana-internal.conf
kibana-internal.crt
kibana-internal.csr
kibana-internal.key
system.logging.kibana.conf
system.logging.kibana.crt
system.logging.kibana.csr
system.logging.kibana.key

[1] https://github.com/openshift/cluster-logging-operator/pull/449
[2] https://github.com/openshift/cluster-logging-operator/pull/463
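One way to confirm the kibana-proxy secret is being recreated (rather than updated in place) is to compare its UID over time: deleting and recreating a Kubernetes secret assigns a new metadata.uid, while an in-place update keeps it. A sketch under that assumption (the `secret_recreated` helper name is hypothetical):

```shell
# Compare two secret UIDs sampled a few seconds apart; differing UIDs
# mean the secret was deleted and recreated, not merely updated.
# Helper name secret_recreated is hypothetical.
secret_recreated() {
  [ "$1" != "$2" ] && echo recreated || echo stable
}

# Live usage (namespace and secret name taken from this bug report):
#   u1=$(oc get secret kibana-proxy -n openshift-logging -o jsonpath='{.metadata.uid}')
#   sleep 10
#   u2=$(oc get secret kibana-proxy -n openshift-logging -o jsonpath='{.metadata.uid}')
#   secret_recreated "$u1" "$u2"
```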
Verified using the CSV clusterlogging.4.3.14-202004231410 from redhat-operators-stage. The ImagePullBackOff errors on the Elasticsearch pods are an unrelated image issue and do not block verification of this bug.

oc get pods
NAME                                            READY   STATUS             RESTARTS   AGE
cluster-logging-operator-b48b75688-cp8rk        1/1     Running            0          88m
curator-1587702000-x48ww                        1/1     Running            0          48m
curator-1587702600-rmxt9                        1/1     Running            0          38m
curator-1587703200-r9zw6                        1/1     Running            0          28m
curator-1587703800-x27rm                        1/1     Running            0          18m
curator-1587704400-krrz6                        1/1     Running            0          8m54s
elasticsearch-cdm-rkrf6xcu-1-745775957c-fh49k   1/2     ImagePullBackOff   0          51m
elasticsearch-cdm-rkrf6xcu-2-9cc497dcb-zk4kn    1/2     ImagePullBackOff   0          51m
elasticsearch-cdm-rkrf6xcu-3-5c4469cc9f-dvgng   1/2     ImagePullBackOff   0          50m
fluentd-2j4hm                                   1/1     Running            0          51m
fluentd-2xpl4                                   1/1     Running            0          51m
fluentd-7p6c8                                   1/1     Running            0          51m
fluentd-9622j                                   1/1     Running            0          51m
fluentd-gmg2h                                   1/1     Running            0          51m
fluentd-p4wbd                                   1/1     Running            0          51m
kibana-f946dd446-fghs9                          2/2     Running            0          51m
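To double-check that the fixed image carries the session-secret handling from the PR referenced earlier, one can look for the kibana-session-secret file in the operator's cert directory (/tmp/ocp-clo, per the earlier comment). A hedged sketch; the deployment name and the `has_session_secret` helper are assumptions:

```shell
# Report whether a directory listing on stdin contains the
# kibana-session-secret file. Helper name has_session_secret is
# hypothetical.
has_session_secret() {
  grep -q '^kibana-session-secret' && echo present || echo missing
}

# Live usage (deployment name cluster-logging-operator is assumed):
#   oc exec -n openshift-logging deploy/cluster-logging-operator -- \
#     ls -1 /tmp/ocp-clo | has_session_secret
```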
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:1529