Created attachment 1700734 [details]
ovnkube-master logs

Description of problem:

1. After master replacement, the network operator is degraded:

[kni@provisionhost-0-0 ~]$ oc get co
NAME      VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
network   4.5.0-0.nightly-2020-07-11-040723   True        True          True       6h8m

2. The ovnkube-master pod on the new master is in CrashLoopBackOff:

[kni@provisionhost-0-0 ~]$ oc get pods -n openshift-ovn-kubernetes -o wide
NAME                   READY   STATUS             RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
ovnkube-master-678q7   2/4     CrashLoopBackOff   17         66m     192.168.123.100   master-0-3   <none>           <none>
ovnkube-master-dpnbf   4/4     Running            2          6h15m   192.168.123.146   master-0-2   <none>           <none>
ovnkube-master-r47jh   4/4     Running            0          6h15m   192.168.123.111   master-0-1   <none>           <none>

No other pods are reported as problematic:

[kni@provisionhost-0-0 ~]$ oc get pods -A | grep -vE "Running|Complete"
NAMESPACE                  NAME                   READY   STATUS             RESTARTS   AGE
openshift-ovn-kubernetes   ovnkube-master-678q7   2/4     CrashLoopBackOff   17         69m

Version-Release number of the following components:
Client Version: 4.5.0-0.nightly-2020-06-05-214616
Server Version: 4.5.0-0.nightly-2020-07-11-040723
Kubernetes Version: v1.18.3+8b0a82f

How reproducible:
Spotted on 2 setups.

Steps to Reproduce:
1. Run https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-31848
2. After the procedure has passed, run:
$ oc get co
$ oc get pods -A | grep -vE "Running|Complete"

Actual results:
See above.

Expected results:

Additional info:
Attaching a tar containing:
- logs from the crashing pod
- logs from the sbdb container of a running pod
*** This bug has been marked as a duplicate of bug 1858767 ***