Description of problem: The kube-apiserver pod's environment variable NO_PROXY is not being set.

$ ./oc --kubeconfig=auth/kubeconfig version
Client Version: version.Info{Major:"", Minor:"", GitVersion:"v4.2.0-alpha.0-2-g8fdb79e", GitCommit:"8fdb79e", GitTreeState:"clean", BuildDate:"2019-08-05T20:29:53Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0+8e63b6d", GitCommit:"8e63b6d", GitTreeState:"clean", BuildDate:"2019-08-05T20:45:49Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
OpenShift Version: 4.2.0-0.okd-2019-08-06-133704

The kube-apiserver static pod has HTTP_PROXY and HTTPS_PROXY set, but no NO_PROXY:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/config.hash: b3565780f87dd51734f1dcf22cef5363
    kubernetes.io/config.mirror: b3565780f87dd51734f1dcf22cef5363
    kubernetes.io/config.seen: "2019-08-06T16:54:07.377488175Z"
    kubernetes.io/config.source: file
  creationTimestamp: "2019-08-06T16:54:07Z"
  labels:
    apiserver: "true"
    app: openshift-kube-apiserver
    revision: "641"
  name: kube-apiserver-ip-10-0-137-4.us-east-2.compute.internal
  namespace: openshift-kube-apiserver
  resourceVersion: "70394"
  selfLink: /api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-ip-10-0-137-4.us-east-2.compute.internal
  uid: ce0e3155-b86a-11e9-a442-0627ffc2281c
spec:
  containers:
  - args:
    - --openshift-config=/etc/kubernetes/static-pod-resources/configmaps/config/config.yaml
    - -v=2
    command:
    - hyperkube
    - kube-apiserver
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: STATIC_POD_VERSION
      value: "641"
    - name: HTTPS_PROXY
      value: http://username:password@x.x.x.x:3128
    - name: HTTP_PROXY
      value: http://username:password@x.x.x.x:3128
    image: registry.svc.ci.openshift.org/origin/4.2-2019-08-06-133704@sha256:ebf456cebf52f45284dd58318cdc4654a24fc3724a1099ba466ac2888b94698d
    imagePullPolicy: IfNotPresent

$ ./oc --kubeconfig=auth/kubeconfig get proxies.config.openshift.io cluster -o yaml
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  creationTimestamp: "2019-08-06T14:08:38Z"
  generation: 1
  name: cluster
  resourceVersion: "432"
  selfLink: /apis/config.openshift.io/v1/proxies/cluster
  uid: aff52ce0-b853-11e9-8172-02db5a36ce8c
spec:
  httpProxy: <redacted>
  httpsProxy: <redacted>
  trustedCA:
    name: ""
status:
  httpProxy: http://jcallen:trustn01@52.73.102.120:3128
  httpsProxy: http://jcallen:trustn01@52.73.102.120:3128
  noProxy: ',10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.jcallen-pxycvo-2.devcluster.openshift.com,api.jcallen-pxycvo-2.devcluster.openshift.com,etcd-0.jcallen-pxycvo-2.devcluster.openshift.com,etcd-1.jcallen-pxycvo-2.devcluster.openshift.com,etcd-2.jcallen-pxycvo-2.devcluster.openshift.com,localhost'
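For reference, the status.noProxy value above is a comma-separated list of hostnames and CIDRs that proxy-aware clients exclude from proxying. A minimal sketch of how such a value is typically parsed (the value here is abbreviated, and this is not OpenShift's actual implementation); note the leading comma in the status value produces an empty entry that consumers must skip:

```python
# Abbreviated copy of the status.noProxy value shown above.
no_proxy = ",10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost"

# Split on commas and drop empty entries (the leading comma would
# otherwise yield an empty string as the first element).
entries = [host.strip() for host in no_proxy.split(",") if host.strip()]

print(entries)
```

A request destination matching any of these entries bypasses HTTP_PROXY/HTTPS_PROXY; without NO_PROXY set on the pod, even in-cluster traffic would be sent through the proxy.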
Currently the openshift-apiserver, kube-apiserver, and authentication operators are all watching the Proxy spec, because there is no component that sets the status.
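Once the status is populated and the operators observe it, the kube-apiserver static pod's env stanza would be expected to carry all three proxy variables. A hypothetical sketch of the desired result (values illustrative, not the operator's literal output):

```yaml
env:
- name: HTTP_PROXY
  value: http://username:password@x.x.x.x:3128
- name: HTTPS_PROXY
  value: http://username:password@x.x.x.x:3128
- name: NO_PROXY
  value: 10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,localhost
```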
Setting the proxy status from the manifest generated by the installer was resolved by:
https://github.com/openshift/library-go/pull/479
https://github.com/openshift/cluster-bootstrap/pull/27
https://github.com/openshift/library-go/pull/501 is an upstream PR that, once merged, will make the kube-apiserver and openshift-apiserver operators watch the correct fields.
https://github.com/openshift/library-go/pull/501 has merged.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922