Bug 1818182 - KubePodCrashLooping: Pod openshift-sdn/sdn-zbc27
Summary: KubePodCrashLooping: Pod openshift-sdn/sdn-zbc27
Keywords:
Status: CLOSED DUPLICATE of bug 1817657
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Ben Bennett
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-03-27 22:46 UTC by Hongkai Liu
Modified: 2020-03-30 14:39 UTC
CC List: 0 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-03-30 14:39:06 UTC
Target Upstream Version:
Embargoed:


Attachments
sdn-zbc27.log (147.50 KB, text/plain) - 2020-03-27 22:46 UTC, Hongkai Liu
ovs-vf24q.log (2.92 MB, text/plain) - 2020-03-27 22:48 UTC, Hongkai Liu

Description Hongkai Liu 2020-03-27 22:46:59 UTC
Created attachment 1674193
sdn-zbc27.log

Description of problem:
[FIRING:1] KubePodCrashLooping kube-state-metrics (sdn https-main 10.129.64.17:8443 openshift-sdn sdn-zbc27 openshift-monitoring/k8s kube-state-metrics critical)
Pod openshift-sdn/sdn-zbc27 (sdn) is restarting 0.42 times / 5 minutes.

Alertmanager fired the above alert this afternoon.

https://coreos.slack.com/archives/CHY2E1BL4/p1585335910021700
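
For context on the fractional rate: KubePodCrashLooping in openshift-monitoring comes from the upstream kubernetes-mixin rules. As a sketch (assuming this 4.3 build ships the unmodified upstream expression), the alert scales the 15-minute restart rate reported by kube-state-metrics to a per-5-minute value:

# Upstream kubernetes-mixin expression behind KubePodCrashLooping
# (assumed unmodified in this build; verify against the PrometheusRule
# objects in openshift-monitoring):
#
#   rate(kube_pod_container_status_restarts_total{job="kube-state-metrics"}[15m]) * 60 * 5 > 0
#
# 0.42 restarts / 5 min works out to roughly 1.25 restarts inside the
# 15-minute window, i.e. a couple of restarts rather than a tight loop.

# Cross-check against the raw restart counter on the pod:
oc get pod sdn-zbc27 -n openshift-sdn \
  -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'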

What caused the restarts/CrashLooping?
Should we do anything when such an alert happens?
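
For the second question, a minimal triage sequence (a sketch using the pod and namespace from this report, not a documented runbook):

# Why did the container last exit? Check Last State / Reason / Exit Code:
oc describe pod sdn-zbc27 -n openshift-sdn

# Logs from the instance that crashed, not the current one:
oc logs sdn-zbc27 -n openshift-sdn --previous

# Recent events in the namespace around the restarts:
oc get events -n openshift-sdn --sort-by=.lastTimestamp | grep sdn-zbc27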

Version-Release number of selected component (if applicable):
oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.3.0-0.nightly-2020-03-23-130439   True        False         3d9h    Cluster version is 4.3.0-0.nightly-2020-03-23-130439

How reproducible:
Saw it once

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
The previous pod log (sdn-zbc27.log) is attached.

oc get pod -n openshift-sdn -o wide | grep ip-10-0-138-234.ec2.internal
ovs-vf24q              1/1     Running   1          3d10h   10.0.138.234   ip-10-0-138-234.ec2.internal   <none>           <none>
sdn-zbc27              1/1     Running   4          3d10h   10.0.138.234   ip-10-0-138-234.ec2.internal   <none>           <none>
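
Both daemonset pods on that node have restarted (ovs once, sdn four times). The attached files are the previous-instance logs, presumably captured with something like:

oc logs sdn-zbc27 -n openshift-sdn --previous > sdn-zbc27.log
oc logs ovs-vf24q -n openshift-sdn --previous > ovs-vf24q.log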

Comment 1 Hongkai Liu 2020-03-27 22:48:08 UTC
Created attachment 1674194
ovs-vf24q.log

Comment 2 Ben Bennett 2020-03-30 14:39:06 UTC

*** This bug has been marked as a duplicate of bug 1817657 ***

