Bug 1824983
| Summary: | [4.3 upgrade][alert] KubePodCrashLooping: Pod openshift-sdn/sdn-24wfh (sdn) is restarting 0.42 times / 5 minutes. | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Hongkai Liu <hongkliu> |
| Component: | Networking | Assignee: | Jacob Tanenbaum <jtanenba> |
| Networking sub component: | openshift-sdn | QA Contact: | huirwang |
| Status: | CLOSED ERRATA | Docs Contact: | |
| Severity: | high | | |
| Priority: | high | CC: | aconstan, ccoleman, rkhan, wking |
| Version: | 4.3.0 | Keywords: | Upgrades |
| Target Milestone: | --- | | |
| Target Release: | 4.6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-10-27 15:57:47 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description Hongkai Liu 2020-04-16 19:16:27 UTC
In response to internal discussion: the alert is based on the kube_pod_container_status_restarts_total metric [1], so the container exit code should not come into it. CI search turns up a few jobs like this in the past 2 weeks [2]. Scrolling through to find one that is the sdn pod turns up [3]. Logs for the previous container [4] end with:

    warning: Another process is currently listening on the CNI socket, waiting 15s ...
    error: Another process is currently listening on the CNI socket, exiting

Not sure if that 4.5 CI run failure mode is the same one Hongkai hit in 4.3. For reference, the alerting rule from [1] is sketched after the links below.

[1]: https://github.com/openshift/cluster-monitoring-operator/blob/edc056dd3e46f3bd47306310f43beee29fc5090c/assets/prometheus-k8s/rules.yaml#L1160-L1170
[2]: https://search.svc.ci.openshift.org/?search=KubePodCrashLooping.*openshift-sdn&maxAge=14d
[3]: https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gcs/origin-ci-test/pr-logs/pull/openshift-kni_cnf-features-deploy/211/pull-ci-openshift-kni-cnf-features-deploy-master-e2e-gcp-origin/351
[4]: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift-kni_cnf-features-deploy/211/pull-ci-openshift-kni-cnf-features-deploy-master-e2e-gcp-origin/351/artifacts/e2e-gcp-origin/pods/openshift-sdn_sdn-zwpq7_sdn_previous.log
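For context, the KubePodCrashLooping rule referenced in [1] is a rate over that restart counter. The following is a minimal sketch in the upstream kubernetes-mixin style, not the exact rule shipped by the cluster-monitoring-operator; the `for` duration, severity, and annotation wording are illustrative and may differ from the linked rules.yaml:

```yaml
# Sketch of a KubePodCrashLooping-style alerting rule; the authoritative
# definition is the rules.yaml linked in [1]. Values here are illustrative.
- alert: KubePodCrashLooping
  expr: |
    rate(kube_pod_container_status_restarts_total{job="kube-state-metrics"}[15m]) * 60 * 5 > 0
  for: 15m
  labels:
    severity: warning
  annotations:
    message: >-
      Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }})
      is restarting {{ printf "%.2f" $value }} times / 5 minutes.
```

The `* 60 * 5` scaling turns the per-second restart rate into the "restarting N times / 5 minutes" figure quoted in the summary.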
This error shows up in the logs a lot and could cause the sdn pods to spin while waiting for the ovs pods to come up:

    Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:ovsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process)

Adding a step to ovs.yaml to explicitly delete the pidfile on exit, roughly along the lines of the sketch below.
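This is a minimal sketch of what such a cleanup can look like in the openvswitch DaemonSet manifest. It is not the actual ovs.yaml from the cluster-network-operator; the container name and script structure are assumed for illustration, and only the ovsdb-server pidfile path comes from the error message above.

```yaml
# Illustrative fragment, not the shipped ovs.yaml. The point is removing a
# stale pidfile left by an uncleanly terminated container, so ovsdb-server's
# "pidfile check failed (No such process)" error cannot recur on restart.
containers:
- name: openvswitch          # container name assumed for illustration
  command:
  - /bin/bash
  - -c
  - |
    set -euo pipefail
    cleanup() {
      # Delete the pidfile left behind by a previous container instance.
      rm -f /var/run/openvswitch/ovsdb-server.pid
    }
    trap cleanup EXIT
    cleanup   # also clean up before starting, in case the previous exit was unclean
    # ... start ovsdb-server and ovs-vswitchd as the real manifest does ...
```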
Verified in 4.6.0-0.nightly-2020-06-09-234748.
Did an upgrade from 4.5.0-0.nightly-2020-06-09-223121 to 4.6.0-0.nightly-2020-06-09-234748. Checked the alerts in the Prometheus console; there is no FIRING alert about KubePodCrashLooping. Also used `count(max by (_id) (alerts{alertstate="firing",alertname="KubePodCrashLooping",namespace="openshift-sdn",pod=~"sdn-.*"} offset 48h))` in the Prometheus console; the result is "no data". Moving it to verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196