Bug 1856084 - After master replacement ovnkube-master pod on the new master in CrashLoopBackOff
Keywords:
Status: CLOSED DUPLICATE of bug 1858767
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Ben Bennett
QA Contact: Anurag saxena
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-07-12 15:12 UTC by Lubov
Modified: 2020-07-21 13:29 UTC
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-21 13:29:28 UTC
Target Upstream Version:
Embargoed:


Attachments
ovnkube-master logs (350.00 KB, application/x-tar)
2020-07-12 15:12 UTC, Lubov

Description Lubov 2020-07-12 15:12:27 UTC
Created attachment 1700734 [details]
ovnkube-master logs

Description of problem:
1. After master replacement, the network operator is degraded:
[kni@provisionhost-0-0 ~]$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
network                                    4.5.0-0.nightly-2020-07-11-040723   True        True          True       6h8m

2. The ovnkube-master pod on the new master is in CrashLoopBackOff:
[kni@provisionhost-0-0 ~]$ oc get pods -n openshift-ovn-kubernetes -o wide
NAME                   READY   STATUS             RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
ovnkube-master-678q7   2/4     CrashLoopBackOff   17         66m     192.168.123.100   master-0-3   <none>           <none>
ovnkube-master-dpnbf   4/4     Running            2          6h15m   192.168.123.146   master-0-2   <none>           <none>
ovnkube-master-r47jh   4/4     Running            0          6h15m   192.168.123.111   master-0-1   <none>           <none>

No other pods are reported as problematic:
[kni@provisionhost-0-0 ~]$ oc get pods -A |grep -vE "Running|Complete"
NAMESPACE                                          NAME                                                         READY   STATUS             RESTARTS   AGE
openshift-ovn-kubernetes                           ovnkube-master-678q7                                         2/4     CrashLoopBackOff   17         69m
 

Version-Release number of the following components:
Client Version: 4.5.0-0.nightly-2020-06-05-214616
Server Version: 4.5.0-0.nightly-2020-07-11-040723
Kubernetes Version: v1.18.3+8b0a82f

How reproducible:
Observed on 2 setups

Steps to Reproduce:
1. Run https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-31848
2. After the procedure completes, run:
$ oc get co
$ oc get pods -A |grep -vE "Running|Complete"
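The `grep -vE "Running|Complete"` filter in step 2 simply drops healthy pods, leaving the header plus anything in an abnormal state. A minimal sketch of the same pipeline against canned `oc get pods -A` output (the sample rows below are illustrative, not taken from a live cluster):

```shell
# Illustrative 'oc get pods -A' output; rows are sample data, not real cluster state
cat <<'EOF' > /tmp/pods.txt
NAMESPACE                  NAME                   READY   STATUS             RESTARTS   AGE
openshift-ovn-kubernetes   ovnkube-master-678q7   2/4     CrashLoopBackOff   17         69m
openshift-ovn-kubernetes   ovnkube-master-dpnbf   4/4     Running            2          6h15m
openshift-marketplace      installer-job-x7k2p    0/1     Completed          0          3h
EOF

# Keep only lines that are neither Running nor Completed (the header also survives)
grep -vE "Running|Complete" /tmp/pods.txt
```

On this sample the output is the header line plus the CrashLoopBackOff row, which matches the output shown in the bug description.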

Actual results:
see above

Expected results:
The network operator reports Available without Degraded, and all ovnkube-master pods are Running (4/4) on every master, including the replacement.
Additional info:
Attaching a tar containing:
- logs of the crashing pod
- logs of the running pod's sbdb container

Comment 2 Ben Bennett 2020-07-21 13:29:28 UTC

*** This bug has been marked as a duplicate of bug 1858767 ***

