Description of problem:

Nodes become NotReady due to "Missing CNI default network" with the OVN plugin on Azure. When new nodes are added to the cluster through auto-scaling, they stay in the NotReady state with the following message:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network

Version-Release number of selected component (if applicable):

$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-09-25-191732   True        False         4h21m   Cluster version is 4.2.0-0.nightly-2019-09-25-191732

How reproducible:

Steps to Reproduce:
1. Set up IPI on Azure with networkType: "OVNKubernetes".
2. Add nodes through automatic or manual scaling.

Actual results:

The following panic appears in the ovnkube-node logs:

# oc logs -n openshift-ovn-kubernetes ovnkube-node-xdzhq -c ovn-node
================== ovnkube.sh --- version: 3 ================
==================== command: ovn-node ===================
hostname: shared-42-upgrade-j48bc-worker-centralus3-6m6qq
=================== daemonset version 3 ===================
Image built from ovn-kubernetes ref: refs/heads/rhaos-4.2-rhel-7  commit: 93f3f5b6d94ebcc16b869697f84b5d6f988d7cf8
=============== ovn-node - (wait for ovs) ===============
ovn-node - (wait for ready_to_start_node)
ovn_nbdb tcp://10.0.0.5:9641   ovn_sbdb tcp://10.0.0.5:9642
ovn_nbdb_test tcp:10.0.0.5:9641
=============== ovn-node - (ovn-node wait for ovn-controller.pid) ===============
ovn-node --init-node
info: Waiting for process_ready ovnkube to come up, waiting 1s ...
panic: failed to localnet gateway: No chassis ID configured for node shared-42-upgrade-j48bc-worker-centralus3-6m6qq

goroutine 1 [running]:
main.runOvnKube(0xc00029a580, 0x0, 0x0)
	/go-controller/_output/go/src/github.com/ovn-org/ovn-kubernetes/go-controller/cmd/ovnkube/ovnkube.go:234 +0x9fe
main.main.func1(0xc00029a580, 0xc00029a580, 0xc000111cf7)
	/go-controller/_output/go/src/github.com/ovn-org/ovn-kubernetes/go-controller/cmd/ovnkube/ovnkube.go:100 +0x2b
github.com/ovn-org/ovn-kubernetes/go-controller/vendor/github.com/urfave/cli.HandleAction(0x126ec40, 0x14d3d50, 0xc00029a580, 0xc000284d80, 0x0)
	/go-controller/_output/go/src/github.com/ovn-org/ovn-kubernetes/go-controller/vendor/github.com/urfave/cli/app.go:502 +0xbe
github.com/ovn-org/ovn-kubernetes/go-controller/vendor/github.com/urfave/cli.(*App).Run(0xc000109880, 0xc0000ae000, 0xf, 0xf, 0x0, 0x0)
	/go-controller/_output/go/src/github.com/ovn-org/ovn-kubernetes/go-controller/vendor/github.com/urfave/cli/app.go:268 +0x5b7
main.main()
	/go-controller/_output/go/src/github.com/ovn-org/ovn-kubernetes/go-controller/cmd/ovnkube/ovnkube.go:103 +0x59b
info: Waiting for process_ready ovnkube to come up, waiting 5s ...
(the line above repeats while the container crash-loops)
Expected results:

Nodes should be in Ready state.

Additional info:
Fixed by https://github.com/ovn-org/ovn-kubernetes/pull/799 upstream and should be in 4.3 already.
*** Bug 1755910 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:0062