Bug 1891779

Summary: dnsPolicy of kube-scheduler, apiserver and controller-manager not aligned with hostNetwork
Product: OpenShift Container Platform
Component: kube-scheduler
Version: 4.6
Target Release: 4.6.z
Target Milestone: ---
Hardware: Unspecified
OS: Unspecified
Severity: medium
Priority: medium
Status: CLOSED WONTFIX
Reporter: Maciej Szulik <maszulik>
Assignee: Maciej Szulik <maszulik>
QA Contact: RamaKasturi <knarra>
CC: aos-bugs, knarra, mfojtik, pbertera
Clone Of: 1889308
Bug Depends On: 1889308
Bug Blocks: 1891781
Last Closed: 2020-11-03 16:31:26 UTC
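For context, a minimal sketch of the alignment the summary refers to (not taken from the actual operator manifests; the pod name, namespace and image are illustrative only). When a pod runs with hostNetwork: true, Kubernetes treats a plain ClusterFirst dnsPolicy as Default, so the manifest has to set ClusterFirstWithHostNet explicitly for the pod to keep using cluster DNS:

apiVersion: v1
kind: Pod
metadata:
  name: openshift-kube-scheduler       # illustrative name only
  namespace: openshift-kube-scheduler
spec:
  hostNetwork: true
  # Without this explicit setting the pod silently falls back to the node's
  # resolv.conf, because ClusterFirst is ignored when hostNetwork is true.
  dnsPolicy: ClusterFirstWithHostNet
  containers:
  - name: kube-scheduler
    image: <scheduler-image>            # placeholder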

Comment 1 RamaKasturi 2020-11-02 09:58:09 UTC
Verified with cluster-bot by launching a cluster with all three of the above PRs, and I see that the fix is working as expected.

[knarra@knarra flexy-templates]$ oc get clusterversion
NAME      VERSION                                           AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.ci.test-2020-11-02-080108-ci-ln-pv4n1z2   True        False         70m     Cluster version is 4.6.0-0.ci.test-2020-11-02-080108-ci-ln-pv4n1z2


kube-scheduler:
===================
[knarra@knarra flexy-templates]$ oc get pod openshift-kube-scheduler-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-scheduler -o yaml | grep 'hostNetwork'
        f:hostNetwork: {}
  hostNetwork: true
[knarra@knarra flexy-templates]$ oc get pod openshift-kube-scheduler-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-scheduler -o yaml | grep 'dnsPolicy'
        f:dnsPolicy: {}
  dnsPolicy: ClusterFirstWithHostNet

openshift-kube-controller-manager:
==================================
[knarra@knarra flexy-templates]$ oc get pod kube-controller-manager-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-controller-manager -o yaml | grep 'hostNetwork'
        f:hostNetwork: {}
  hostNetwork: true
[knarra@knarra flexy-templates]$ oc get pod kube-controller-manager-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-controller-manager -o yaml | grep 'dnsPolicy'
        f:dnsPolicy: {}
  dnsPolicy: ClusterFirstWithHostNet

openshift-kube-apiserver:
============================
[knarra@knarra flexy-templates]$ oc get pod kube-apiserver-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-apiserver -o yaml | grep 'hostNetwork'
        f:hostNetwork: {}
  hostNetwork: true
[knarra@knarra flexy-templates]$ oc get pod kube-apiserver-ci-ln-pv4n1z2-f76d1-hzwhk-master-0 -n openshift-kube-apiserver -o yaml | grep 'dnsPolicy'
        f:dnsPolicy: {}
  dnsPolicy: ClusterFirstWithHostNet
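
As an aside, the same two fields can be read directly with a jsonpath query instead of grepping the full YAML (command sketch only; substitute the pod name and namespace as above):

# Equivalent spot check: prints "<hostNetwork> <dnsPolicy>" for a pod.
oc get pod <pod-name> -n <namespace> \
  -o jsonpath='{.spec.hostNetwork} {.spec.dnsPolicy}{"\n"}'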

Will verify again and move this bug to the Verified state once a payload containing the fix is available.

Comment 2 Maciej Szulik 2020-11-03 16:31:26 UTC
This change is causing issues during startup because the ClusterFirstWithHostNet DNS policy forces use of the in-cluster DNS server, which is not available while the core control-plane components are starting up.
I'm currently discussing how to solve this for the kubelet first. Closing this as WONTFIX for now.
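
To make the failure mode concrete, a purely illustrative pod-spec fragment comparing the two policies (the 172.30.0.10 address is just the usual default cluster DNS service IP on OpenShift, not taken from this cluster):

spec:
  hostNetwork: true
  # Keeps cluster DNS: the kubelet points the pod at the DNS service IP
  # (e.g. 172.30.0.10), but that resolver is itself served by in-cluster
  # pods, so it is unreachable while the control plane is bootstrapping.
  dnsPolicy: ClusterFirstWithHostNet
---
spec:
  hostNetwork: true
  # Inherits the node's own /etc/resolv.conf, which resolves names even
  # before any cluster workloads are running -- the behaviour the static
  # control-plane pods rely on during startup.
  dnsPolicy: Default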