During an upgrade of a cluster in the CI build farm, we saw a sequence of alerts and failure messages from clusterversion. The upgrade was triggered with:

oc --context build01 adm upgrade --allow-explicit-upgrade --to-image registry.svc.ci.openshift.org/ocp/release:4.3.0-0.nightly-2020-04-13-190424 --force=true

The upgrade eventually completed successfully (which is nice), but those alerts and messages are alarming. I would like to file a bug for each of them so the next upgrade is less worrying.

https://coreos.slack.com/archives/CHY2E1BL4/p1587059410443100

[FIRING:1] KubePodCrashLooping kube-state-metrics (sdn https-main 10.128.237.134:8443 openshift-sdn sdn-24wfh openshift-monitoring/k8s kube-state-metrics critical)
Pod openshift-sdn/sdn-24wfh (sdn) is restarting 0.42 times / 5 minutes.
In response to internal discussion: the alert [1] is based on the kube_pod_container_status_restarts_total metric, so the container exit code should not factor into it. CI search turns up a few jobs like this in the past two weeks [2]. Scrolling through to find one that involves the sdn pod turns up [3]. The logs for the previous container [4] end with:

warning: Another process is currently listening on the CNI socket, waiting 15s ...
error: Another process is currently listening on the CNI socket, exiting

I'm not sure whether that 4.5 CI failure mode is the same one Hongkai hit in 4.3.

[1]: https://github.com/openshift/cluster-monitoring-operator/blob/edc056dd3e46f3bd47306310f43beee29fc5090c/assets/prometheus-k8s/rules.yaml#L1160-L1170
[2]: https://search.svc.ci.openshift.org/?search=KubePodCrashLooping.*openshift-sdn&maxAge=14d
[3]: https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gcs/origin-ci-test/pr-logs/pull/openshift-kni_cnf-features-deploy/211/pull-ci-openshift-kni-cnf-features-deploy-master-e2e-gcp-origin/351
[4]: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift-kni_cnf-features-deploy/211/pull-ci-openshift-kni-cnf-features-deploy-master-e2e-gcp-origin/351/artifacts/e2e-gcp-origin/pods/openshift-sdn_sdn-zwpq7_sdn_previous.log
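For context on the "restarting 0.42 times / 5 minutes" figure in the original report: the rule in [1] takes the per-second rate of the restart counter and scales it to a five-minute window (roughly rate(kube_pod_container_status_restarts_total[15m]) * 60 * 5). A minimal sketch of that arithmetic, using a hypothetical counter increase of 1.25 restarts over the 15-minute rate window (chosen to reproduce the reported value, not taken from the cluster):

```shell
# Illustrative only: reproduce the "0.42 times / 5 minutes" number.
# 1.25 restarts over the 15m window is a hypothetical value.
restarts=1.25
window_seconds=$((15 * 60))

# rate() yields restarts per second; the rule multiplies by 60 * 5
# to express that as restarts per five minutes.
awk -v r="$restarts" -v w="$window_seconds" \
    'BEGIN { printf "%.2f\n", (r / w) * 60 * 5 }'
```

So even a single restart every ~12 minutes is enough to keep this alert firing.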
This error shows up in the logs a lot:

Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:ovsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process)

which can cause the sdn pods to spin while waiting for the ovs pods to come up. Adding a step to ovs.yaml to explicitly delete the pidfile on exit.
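The cleanup idea can be sketched as a shell fragment: the container's wrapper script sets an EXIT trap that removes the pidfile, so a restarted pod never trips the stale-pidfile check. The path and structure below are illustrative, not the actual ovs.yaml change:

```shell
# Illustrative sketch (not the actual ovs.yaml change): remove the
# ovsdb-server pidfile on any exit path, so the next start does not
# fail with "pidfile check failed (No such process)".
PIDFILE="${TMPDIR:-/tmp}/ovsdb-server-demo.pid"

(
  # In the real container this would be the entrypoint; the trap
  # fires whether the daemon exits cleanly or is killed.
  trap 'rm -f "$PIDFILE"' EXIT
  echo "$$" > "$PIDFILE"   # ovsdb-server would write this itself
  # ... ovsdb-server would run here until the container stops ...
)

# After the (sub)shell exits, the stale pidfile is gone:
ls "$PIDFILE" 2>/dev/null || echo "pidfile removed"
```

Without the cleanup, the pid recorded in the leftover file belongs to a process that no longer exists, which is exactly the "(No such process)" failure in the log above.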
Verified in 4.6.0-0.nightly-2020-06-09-234748. Upgraded from 4.5.0-0.nightly-2020-06-09-223121 to 4.6.0-0.nightly-2020-06-09-234748 and checked the alerts in the Prometheus console: no FIRING alert about KubePodCrashLooping. Also ran

count(max by (_id) (alerts{alertstate="firing",alertname="KubePodCrashLooping",namespace="openshift-sdn",pod=~"sdn-.*"} offset 48h))

in the Prometheus console; the result is "no data". Moving to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196