Created attachment 1675522 [details]
Kubelet logs

Description of problem:
Tried bringing up an OVNKubernetes cluster on Azure/AWS, but the worker nodes fail to come up. The environment is 3 masters and 3 workers. The same cluster installs successfully with OpenShiftSDN. Kubelet logs are attached.

# oc get nodes
NAME                          STATUS   ROLES    AGE    VERSION
ip-x-x-x-x.compute.internal   Ready    master   111m   v1.17.1
ip-x-x-x-x.compute.internal   Ready    master   111m   v1.17.1
ip-x-x-x-x.compute.internal   Ready    master   111m   v1.17.1

# oc get co
NAME               VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
cloud-credential                                       True        False         False      114m
network            4.5.0-0.nightly-2020-04-01-103434   True        True          False      109m

# oc get pods -n openshift-ovn-kubernetes
NAME                   READY   STATUS    RESTARTS   AGE
ovnkube-master-b5h8j   4/4     Running   0          125m
ovnkube-master-lgzz9   4/4     Running   0          125m
ovnkube-master-vhbpw   4/4     Running   0          125m
ovnkube-node-28x8r     2/2     Running   0          125m
ovnkube-node-d8pv5     2/2     Running   0          125m
ovnkube-node-k2vbn     2/2     Running   0          125m
ovs-node-5mk5j         1/1     Running   0          125m
ovs-node-bkmmt         1/1     Running   0          125m
ovs-node-zg97n         1/1     Running   0          125m

Version-Release number of selected component (if applicable):
4.5.0-0.nightly-2020-04-01-103434

How reproducible:
Always

Steps to Reproduce:
1. Bring up an OVNKubernetes cluster (see the install-config sketch below)

Actual results:
Cluster install fails; only the master nodes reach Ready.

Expected results:
Cluster installs successfully, with the worker nodes joining the cluster.

Additional info:
Kubelet logs are attached. The last successful build was 4.5.0-0.nightly-2020-03-30-083935.
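For reference, a minimal sketch of the networking stanza in install-config.yaml used to select OVNKubernetes at install time (networkType is the field that matters here; the CIDR values shown are the usual installer defaults and are an assumption about this particular cluster):

  networking:
    networkType: OVNKubernetes   # default is OpenShiftSDN; this selects OVN-Kubernetes
    clusterNetwork:
    - cidr: 10.128.0.0/14        # assumed default pod network CIDR
      hostPrefix: 23
    serviceNetwork:
    - 172.30.0.0/16              # assumed default service network CIDR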
Same issue as https://bugzilla.redhat.com/show_bug.cgi?id=1819611
*** This bug has been marked as a duplicate of bug 1819611 ***
Yep, this looks good on the latest nightly, 4.5.0-0.nightly-2020-04-02-101459.