Description of problem:

[As seen in https://bugzilla.redhat.com/show_bug.cgi?id=1804681]

level=error msg="Cluster operator kube-controller-manager Degraded is True with MultipleConditionsMatching:
NodeControllerDegraded: The master nodes not ready: node "ci-op-n856n-m-0.c.openshift-gce-devel-ci.internal" not ready since 2020-02-19 05:31:39 +0000 UTC because KubeletNotReady (runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: Missing CNI default network)
NodeInstallerDegraded: 1 nodes are failing on revision 3:
NodeInstallerDegraded:
InstallerPodContainerWaitingDegraded: Pod "installer-4-ci-op-n856n-m-2.c.openshift-gce-devel-ci.internal" on node "ci-op-n856n-m-2.c.openshift-gce-devel-ci.internal" container "installer" is waiting for 36m3.68513657s because ""
InstallerPodNetworkingDegraded: Pod "installer-4-ci-op-n856n-m-2.c.openshift-gce-devel-ci.internal" on node "ci-op-n856n-m-2.c.openshift-gce-devel-ci.internal" observed degraded networking: Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-ci-op-n856n-m-2.c.openshift-gce-devel-ci.internal_openshift-kube-controller-manager_45778f91-2fc1-4cf7-a05c-6bc2dd79bf9f_0(15cdbe4051ec6ae29b1c89ee64ffa369f36d9fc3fbd846c4adc4352261e28937): Multus: error adding pod to network "ovn-kubernetes": delegateAdd: error invoking DelegateAdd - "ovn-k8s-cni-overlay": error in getting result from AddNetwork: CNI request failed with status 400: 'failed to get pod annotation: timed out waiting for the condition
InstallerPodNetworkingDegraded: '"

Version-Release number of selected component (if applicable):
Seen in 4.3

Expected results:
The error message above should better identify what the problem is. This may just be expected if the pod has not yet started, or we should be shouting more clearly that there is a networking problem if something unrecoverable happened.
The problem is that "Missing CNI default network" is expected when a node starts and pods are started before networking is ready. But the message above looks a bit different and may be a separate problem. If it is, we need to make it clear what to look at.

Additional info:
I believe this is actually a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1777040 -- which covers updating the "Missing CNI default network" language.
Hi, this bug has had no updates for a couple of months. What is the impact of this? Can we close it?
1) The "Missing CNI default network" error might have disappeared since then.
2) If #1 is true, then this should be closed.
-Alex
*** This bug has been marked as a duplicate of bug 1777040 ***