Description of problem: Enabling hybrid overlay at cluster deployment time via a custom manifest degrades the CNO and prevents the cluster from being deployed.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create cluster-network-03-config.yml:

```
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  creationTimestamp: null
  name: cluster
spec:
  clusterNetwork:
  - cidr: 192.168.0.0/16
    hostPrefix: 23
  externalIP:
    policy: {}
  networkType: OVNKubernetes
  serviceNetwork:
  - 198.223.0.0/16
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 8500
      hybridOverlayConfig:
        hybridClusterNetwork: []
```

2. Copy it to <CLUSTER-NAME>/manifests.
3. Run the installer.

Actual results:
The kubelet on master-0 reports that the network plugin never comes up:

```
Aug 11 14:50:14 master-0 hyperkube[2992]: E0811 14:50:14.532683    2992 kubelet.go:2194] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
Aug 11 14:50:19 master-0 hyperkube[2992]: E0811 14:50:19.533203    2992 kubelet.go:2194] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?
```

The resulting Network operator object:

```
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  creationTimestamp: "2020-08-11T14:23:27Z"
  generation: 2
  name: cluster
  resourceVersion: "3779"
  selfLink: /apis/operator.openshift.io/v1/networks/cluster
  uid: f67aded8-9b14-4c81-bee2-23e7a7cf1cb5
spec:
  clusterNetwork:
  - cidr: 172.10.0.0/16
    hostPrefix: 23
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork: []
      mtu: 8500
    type: OVNKubernetes
  logLevel: ""
  serviceNetwork:
  - 172.30.0.0/16
status: {}
```

Expected results:
The cluster deploys successfully with hybrid overlay enabled; the network operator does not degrade.

Additional info:
We tried different variations on the custom manifest...with the following results:

```
[@bastion ~]$ cat cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  creationTimestamp: null
  name: cluster
spec:
  clusterNetwork:
  - cidr: 192.168.0.0/16
    hostPrefix: 23
  externalIP:
    policy: {}
  networkType: OVNKubernetes
  serviceNetwork:
  - 198.223.0.0/16
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      mtu: 8500
      hybridOverlayConfig:
        OVNHybridOverlayEnable: true
```

RESULT:

```
[root@worker-107 ~]# oc get co
NAME               VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
cloud-credential             True        False         False      22m
network                      True
[root@worker-107 ~]# oc describe co network
Name:         network
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2020-08-11T14:08:26Z
  Generation:          1
  Resource Version:    3754
  Self Link:           /apis/config.openshift.io/v1/clusteroperators/network
  UID:                 cc74f0d5-820e-4b5c-af71-36f7b3ffcba4
Spec:
Status:
  Conditions:
    Last Transition Time:  2020-08-11T14:08:26Z
    Message:               Error while trying to update operator configuration: could not update object (operator.openshift.io/v1, Kind=Network) /cluster: Network.operator.openshift.io "cluster" is invalid: spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig.hybridClusterNetwork: Invalid value: "null": spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig.hybridClusterNetwork in body must be of type array: "null"
    Reason:                ApplyOperatorConfig
    Status:                True
    Type:                  Degraded
    Last Transition Time:  2020-08-11T14:08:26Z
    Status:                True
    Type:                  Upgradeable
Extension:  <nil>
Events:     <none>
[root@worker-107 ~]#
```
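For comparison, a manifest that avoids the "must be of type array" validation error would supply at least one entry in `hybridClusterNetwork`, which the operator CRD defines as an array of `cidr`/`hostPrefix` pairs (this shape matches the documented hybrid-networking configuration; the CIDR value below is illustrative, not from this report):

```
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        # illustrative CIDR; must not overlap clusterNetwork/serviceNetwork
        - cidr: 10.132.0.0/14
          hostPrefix: 23
```

Note that both failing variations above (`hybridClusterNetwork: []` and the unrecognized `OVNHybridOverlayEnable: true` key) differ from this shape, which is consistent with the ApplyOperatorConfig error message.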
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196