Bug 1824983 - [4.3 upgrade][alert] KubePodCrashLooping: Pod openshift-sdn/sdn-24wfh (sdn) is restarting 0.42 times / 5 minutes.
Summary: [4.3 upgrade][alert] KubePodCrashLooping: Pod openshift-sdn/sdn-24wfh (sdn) is restarting 0.42 times / 5 minutes.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Jacob Tanenbaum
QA Contact: huirwang
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-04-16 19:16 UTC by Hongkai Liu
Modified: 2020-10-27 15:58 UTC
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 15:57:47 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-network-operator pull 661 0 None closed Bug 1824983: explicitly delete pidfiles when exiting 2021-01-22 02:11:43 UTC
Red Hat Product Errata RHBA-2020:4196 0 None None None 2020-10-27 15:58:06 UTC

Internal Links: 1824981

Description Hongkai Liu 2020-04-16 19:16:27 UTC
During an upgrade of a cluster in the CI build farm, we saw a sequence of alerts and failure messages from clusterversion.

oc --context build01 adm upgrade --allow-explicit-upgrade --to-image registry.svc.ci.openshift.org/ocp/release:4.3.0-0.nightly-2020-04-13-190424 --force=true

Eventually the upgrade completed successfully (which is so nice).
But those alerts and messages are alarming.

I would like to create a bug for each of them and feel better about the next upgrade.

https://coreos.slack.com/archives/CHY2E1BL4/p1587059410443100


[FIRING:1] KubePodCrashLooping kube-state-metrics (sdn https-main 10.128.237.134:8443 openshift-sdn sdn-24wfh openshift-monitoring/k8s kube-state-metrics critical)
Pod openshift-sdn/sdn-24wfh (sdn) is restarting 0.42 times / 5 minutes.

Comment 9 W. Trevor King 2020-05-25 02:39:33 UTC
Following up on internal discussion: the alert is based on the kube_pod_container_status_restarts_total metric [1], so the container exit code should not come into it.  CI search turns up a few jobs like this in the past 2 weeks [2].  Scrolling through to find one that is from the sdn pod turns up [3].  Logs for the previous container [4] end with:

  warning: Another process is currently listening on the CNI socket, waiting 15s ...
  error: Another process is currently listening on the CNI socket, exiting

Not sure if that 4.5 CI run failure mode is the same one Hongkai hit in 4.3.

[1]: https://github.com/openshift/cluster-monitoring-operator/blob/edc056dd3e46f3bd47306310f43beee29fc5090c/assets/prometheus-k8s/rules.yaml#L1160-L1170
[2]: https://search.svc.ci.openshift.org/?search=KubePodCrashLooping.*openshift-sdn&maxAge=14d
[3]: https://deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gcs/origin-ci-test/pr-logs/pull/openshift-kni_cnf-features-deploy/211/pull-ci-openshift-kni-cnf-features-deploy-master-e2e-gcp-origin/351
[4]: https://storage.googleapis.com/origin-ci-test/pr-logs/pull/openshift-kni_cnf-features-deploy/211/pull-ci-openshift-kni-cnf-features-deploy-master-e2e-gcp-origin/351/artifacts/e2e-gcp-origin/pods/openshift-sdn_sdn-zwpq7_sdn_previous.log
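
For context, the KubePodCrashLooping rule linked in [1] fires on a restart-rate expression over that metric. A paraphrased sketch of the rule's shape (the exact labels and threshold should be checked against the linked rules.yaml; this is from memory of the upstream kubernetes-mixin rule, not copied from the file):

```yaml
# Sketch of the KubePodCrashLooping rule shape; see [1] for the real thing.
- alert: KubePodCrashLooping
  expr: rate(kube_pod_container_status_restarts_total{job="kube-state-metrics"}[15m]) * 60 * 5 > 0
  for: 15m
  labels:
    severity: critical
  annotations:
    message: Pod {{ $labels.namespace }}/{{ $labels.pod }} ({{ $labels.container }}) is restarting {{ printf "%.2f" $value }} times / 5 minutes.
```

The "0.42 times / 5 minutes" in the alert text is that extrapolated rate, which is why fractional restart counts show up even though the underlying counter only moves in whole restarts.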

Comment 10 Jacob Tanenbaum 2020-06-04 18:15:53 UTC
This error shows up frequently in the logs:

Terminated:&ContainerStateTerminated{ExitCode:1,Signal:0,Reason:Error,Message:ovsdb-server: /var/run/openvswitch/ovsdb-server.pid: pidfile check failed (No such process)

A stale pidfile could cause the sdn pods to spin while waiting for the ovs pods to come up. I am adding a change to ovs.yaml to explicitly delete the pidfile on exit.
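
The cluster-network-operator PR linked above deletes the pidfiles when exiting. A minimal illustrative sketch of that idea in POSIX shell (the pidfile path comes from the error message above; this is an illustration of the technique, not the actual diff):

```shell
#!/bin/sh
# Illustration only: remove the ovsdb-server pidfile when the wrapper
# script exits, so a restarted container does not hit
# "pidfile check failed (No such process)" against a stale PID.
PIDFILE="${PIDFILE:-/var/run/openvswitch/ovsdb-server.pid}"

cleanup() {
    # -f: no error if the pidfile was never created.
    rm -f "$PIDFILE"
}
trap cleanup EXIT

# ... start ovsdb-server / ovs-vswitchd here ...
```

The EXIT trap runs on both normal termination and `exit` from an error path, which is what makes it a reasonable place for this kind of cleanup.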

Comment 13 huirwang 2020-06-10 08:58:03 UTC
Verified in 4.6.0-0.nightly-2020-06-09-234748.

Did an upgrade from 4.5.0-0.nightly-2020-06-09-223121 to 4.6.0-0.nightly-2020-06-09-234748. Checked the alerts in the Prometheus console; there was no FIRING alert for KubePodCrashLooping. Also ran count(max by (_id) (alerts{alertstate="firing",alertname="KubePodCrashLooping",namespace="openshift-sdn",pod=~"sdn-.*"} offset 48h)) in the Prometheus console; the result is "no data". Moving it to VERIFIED.

Comment 15 errata-xmlrpc 2020-10-27 15:57:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

