Verified with Cluster Bot by launching a cluster with all three PRs above; the fix is working as expected.

[knarra@knarra flexy-templates]$ oc get clusterversion
NAME      VERSION                                            AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.ci.test-2020-11-02-080108-ci-ln-pv4n1z2    True        False         70m     Cluster version is 4.6.0-0.ci.test-2020-11-02-080108-ci-ln-pv4n1z2

kube-scheduler:
===================
[knarra@knarra flexy-templates]$ oc get pod openshift-kube-scheduler-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-scheduler -o yaml | grep 'hostNetwork'
        f:hostNetwork: {}
  hostNetwork: true
[knarra@knarra flexy-templates]$ oc get pod openshift-kube-scheduler-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-scheduler -o yaml | grep 'dnsPolicy'
        f:dnsPolicy: {}
  dnsPolicy: ClusterFirstWithHostNet

openshift-kube-controller-manager:
==================================
[knarra@knarra flexy-templates]$ oc get pod kube-controller-manager-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-controller-manager -o yaml | grep 'hostNetwork'
        f:hostNetwork: {}
  hostNetwork: true
[knarra@knarra flexy-templates]$ oc get pod kube-controller-manager-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-controller-manager -o yaml | grep 'dnsPolicy'
        f:dnsPolicy: {}
  dnsPolicy: ClusterFirstWithHostNet

openshift-kube-apiserver:
============================
[knarra@knarra flexy-templates]$ oc get pod kube-apiserver-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-apiserver -o yaml | grep 'hostNetwork'
        f:hostNetwork: {}
  hostNetwork: true
[knarra@knarra flexy-templates]$ oc get pod kube-apiserver-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-apiserver -o yaml | grep 'dnsPolicy'
        f:dnsPolicy: {}
  dnsPolicy: ClusterFirstWithHostNet

Will move this bug to Verified state once a payload containing the fix is available.
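For what it's worth, the same two fields can be read directly with jsonpath instead of grepping the full YAML (a minimal sketch; the pod name is the one from the run above, and the expected output assumes the fix is in place):

[knarra@knarra flexy-templates]$ oc get pod openshift-kube-scheduler-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 \
    -n openshift-kube-scheduler \
    -o jsonpath='{.spec.hostNetwork}{"\n"}{.spec.dnsPolicy}{"\n"}'
true
ClusterFirstWithHostNet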
This change is causing issues during startup because the ClusterFirstWithHostNet DNS policy forces the pods to use the in-cluster DNS server, which is not available while the core components are starting up. I'm currently discussing how to solve this for the kubelet first. I'm closing this as won't fix for now.
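To illustrate the combination under discussion, here is an abbreviated sketch of the relevant static pod spec fields (the manifest is illustrative, not the operator-generated one): with hostNetwork: true, ClusterFirstWithHostNet sends name lookups to the in-cluster DNS service instead of the node's resolver, which is exactly what fails while the cluster DNS is not yet running during bootstrap.

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver                  # example name only
  namespace: openshift-kube-apiserver
spec:
  hostNetwork: true                     # pod shares the node's network namespace
  # ClusterFirstWithHostNet resolves through the in-cluster DNS service even on
  # the host network; dnsPolicy: Default would instead use the node's resolver.
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: kube-apiserver
    image: <kube-apiserver-image>       # placeholder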