Bug 1744384

Summary: openshift-apiserver pods do not remove proxy env vars after proxy/cluster removes proxy fields
Product: OpenShift Container Platform
Component: openshift-apiserver
Version: 4.2.0
Reporter: Xingxing Xia <xxia>
Assignee: Standa Laznicka <slaznick>
QA Contact: Xingxing Xia <xxia>
CC: aos-bugs, mfojtik, slaznick
Status: CLOSED ERRATA
Severity: medium
Priority: medium
Target Milestone: ---
Target Release: 4.2.0
Hardware: Unspecified
OS: Unspecified
Whiteboard: ref 1738432
Type: Bug
Last Closed: 2019-10-16 06:37:03 UTC

Description Xingxing Xia 2019-08-22 03:33:33 UTC
Description of problem:
openshift-apiserver pods do not remove the proxy env vars after the proxy fields are removed from the proxy/cluster resource. kube-apiserver does not have this issue.

Version-Release number of selected component (if applicable):
4.2.0-0.nightly-2019-08-20-002921

How reproducible:
Always

Steps to Reproduce:
1. oc edit proxy cluster # add the fields below and save
...
spec:
  httpProxy: http://proxy-user1:...@139.178.76.57:3128
  httpsProxy: http://proxy-user1:...@139.178.76.57:3128
  noProxy: test.no-proxy.com
...

2. oc get po --all-namespaces -w # wait for all proxy-affected cluster component pods to finish restarting

3. oc edit proxy cluster # remove httpProxy, httpsProxy, noProxy, and save

4. oc get po --all-namespaces -w # wait for all proxy-affected cluster component pods to finish restarting

Actual results:
2. Checking the openshift-apiserver and kube-apiserver pod YAML, all containers have these env vars:
    env:
    - name: HTTPS_PROXY
      value: http://proxy-user1:...@139.178.76.57:3128
    - name: HTTP_PROXY
      value: http://proxy-user1:...@139.178.76.57:3128
    - name: NO_PROXY
      value: 10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int...qe.devcluster.openshift.com,api...qe.devcluster.openshift.com,etcd-0...qe.devcluster.openshift.com,etcd-1...qe.devcluster.openshift.com,etcd-2...qe.devcluster.openshift.com,localhost,test.no-proxy.com

4. Checking step 2 again: the kube-apiserver pods are terminated and restarted and now have no proxy env vars, but the openshift-apiserver pods are not terminated and thus still keep the env vars (openshiftapiserver/cluster managementState is Managed).
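The failure mode comes down to how the operator reconciles the apiserver deployment: the desired env var list must be recomputed from the currently observed proxy config on every sync, so that an empty observed config yields no proxy env vars. A minimal sketch of that idea in Python (function and field names are hypothetical, not the operator's actual code):

```python
PROXY_VAR_NAMES = {"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"}

def desired_proxy_env(observed_proxy):
    """Compute container env vars from the observed proxy config.

    observed_proxy mirrors observedConfig.workloadcontroller.proxy: a dict
    like {"HTTP_PROXY": ..., "NO_PROXY": ...}, or an empty dict when
    proxy/cluster has no proxy fields set.
    """
    return [{"name": k, "value": v} for k, v in sorted(observed_proxy.items())]

def reconcile_env(current_env, observed_proxy):
    """Replace any stale proxy env vars with the freshly computed set.

    A correct reconcile drops vars that are no longer observed; leaving
    current_env untouched when observed_proxy is empty reproduces this bug.
    """
    kept = [e for e in current_env if e["name"] not in PROXY_VAR_NAMES]
    return kept + desired_proxy_env(observed_proxy)
```

With an empty observed config, `reconcile_env([{"name": "HTTP_PROXY", "value": "http://..."}], {})` returns `[]`, i.e. the stale proxy vars are removed, which is the behavior expected in step 4.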

Expected results:
4. The openshift-apiserver pods should be terminated, restarted, and have the proxy env vars removed.

Additional info:
During the above steps, the openshiftapiserver/cluster managementState is Managed without modification, and workloadcontroller.proxy is seen in its spec:
spec:
  logLevel: ""
  managementState: Managed
  observedConfig:
...
    workloadcontroller:
      proxy:
        HTTP_PROXY: http://proxy-user1:...@139.178.76.57:3128
        HTTPS_PROXY: http://proxy-user1:...@139.178.76.57:3128
        NO_PROXY: 10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int...qe.devcluster.openshift.com,api...qe.devcluster.openshift.com,etcd-0...qe.devcluster.openshift.com,etcd-1...qe.devcluster.openshift.com,etcd-2...qe.devcluster.openshift.com,localhost,test.no-proxy.com
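Note that the long NO_PROXY value above is not taken verbatim from spec.noProxy: the user-supplied entries are merged with cluster-internal defaults (service/pod CIDRs, localhost, API and etcd hostnames) into one deduplicated, sorted list. A rough sketch of that merge, with example defaults from this cluster rather than an exhaustive or authoritative list:

```python
def merge_no_proxy(user_no_proxy, cluster_defaults):
    """Merge user-supplied noProxy entries with cluster-internal defaults,
    deduplicate, and emit the comma-joined sorted form seen in NO_PROXY."""
    entries = set(cluster_defaults)
    if user_no_proxy:
        entries.update(e.strip() for e in user_no_proxy.split(","))
    return ",".join(sorted(entries))
```

For example, merging `"test.no-proxy.com"` with defaults `["10.128.0.0/14", "127.0.0.1", "localhost"]` yields `"10.128.0.0/14,127.0.0.1,localhost,test.no-proxy.com"`, matching the shape of the observed value.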

Comment 1 Michal Fojtik 2019-08-26 09:41:25 UTC
https://github.com/openshift/cluster-openshift-apiserver-operator/pull/226 has the latest bump that should fix this BZ.

Comment 2 Xingxing Xia 2019-08-28 08:45:05 UTC
Tested the latest build, 4.2.0-0.nightly-2019-08-28-004049, which still reproduces the issue. The PR pasted above is still open, not merged, so moving back to POST.

Comment 5 Xingxing Xia 2019-09-10 00:20:03 UTC
Sorry for the late return to this bug; I was engaged with other bugs and testing.
Verified in 4.2.0-0.nightly-2019-09-08-180038: after changing/removing the proxy fields in the CR, the openshift-apiserver pods now change/remove the proxy env vars correctly.

Comment 6 errata-xmlrpc 2019-10-16 06:37:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922