Bug 1751903

Summary: The default proxy of the pods cannot be overridden by a specific empty value
Product: OpenShift Container Platform Reporter: Jian Zhang <jiazha>
Component: OLM    Assignee: Evan Cordell <ecordell>
OLM sub component: OLM QA Contact: yhui
Status: CLOSED ERRATA Docs Contact:
Severity: medium    
Priority: medium CC: adellape, akashem, bandrade, chuo, dageoffr, dsover, ecordell, jfan, nhale, scolange
Version: 4.2.0    Keywords: Reopened
Target Milestone: ---   
Target Release: 4.4.z   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
: 1804812 (view as bug list) Environment:
Last Closed: 2020-05-04 11:13:32 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
Embargoed:
Bug Depends On: 1804812    
Bug Blocks:    

Description Jian Zhang 2019-09-13 01:55:01 UTC
Description of problem:
When subscribing to an operator with a customized proxy, the initial pods that use the default proxy are not removed.
I'm not sure whether this is a Pod issue or whether the CSV controls these initial pods, so I'm reporting it under the OLM component first. Please feel free to move it to the Pod component if it turns out to be a pod issue.


Version-Release number of selected component (if applicable):
Cluster version: 4.2.0-0.nightly-2019-09-12-162357
mac:~ jianzhang$ oc exec catalog-operator-7dd976578-5gd7g -- olm --version
OLM version: 0.11.0
git commit: 201c8aa7ec382092eef251a0e8c812cc5f7d166a

How reproducible:
always

Steps to Reproduce:
1. Install OCP 4.2 with the proxy enabled.
2. Subscribe to the etcd operator with a customized proxy. Use double quotes in the value field, as below:
mac:~ jianzhang$ cat sub-etcd-42-proxy.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-config-test
  namespace: openshift-operators
spec:
  config:
    env:
    - name: HTTP_PROXY
      value: "test_http"
    - name: HTTPS_PROXY
      value: "test_https"
    - name: NO_PROXY
      value: "test"
  channel: clusterwide-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: etcdoperator.v0.9.4-clusterwide

3. Check whether the customized proxy is set on the etcd-operator deployment (see the example command below).
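For example, the injected proxy env vars can be inspected with the same grep that appears under Additional info below:

oc get deployment -n openshift-operators etcd-operator -o yaml | grep -i "proxy" -A 2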


Actual results:
The customized proxy values were not injected into the operator deployment.

mac:~ jianzhang$ oc get csv -n openshift-operators
NAME                              DISPLAY   VERSION             REPLACES                          PHASE
etcdoperator.v0.9.4-clusterwide   etcd      0.9.4-clusterwide   etcdoperator.v0.9.2-clusterwide   Installing

mac:~ jianzhang$ oc get deployment -n openshift-operators
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
etcd-operator   1/1     1            1           7m17s
mac:~ jianzhang$ oc get pods -n openshift-operators
NAME                             READY   STATUS             RESTARTS   AGE
etcd-operator-55b59d6d78-z7bhk   2/3     CrashLoopBackOff   4          2m21s
etcd-operator-5c7d47f687-kd58k   3/3     Running            0          7m23s

mac:~ jianzhang$ oc get pods -n openshift-operators
NAME                             READY   STATUS             RESTARTS   AGE
etcd-operator-55b59d6d78-z7bhk   2/3     CrashLoopBackOff   8          16m
etcd-operator-5c7d47f687-kd58k   3/3     Running            0          21m
mac:~ jianzhang$ oc delete pods  etcd-operator-5c7d47f687-kd58k -n openshift-operators
pod "etcd-operator-5c7d47f687-kd58k" deleted
mac:~ jianzhang$ oc get pods -n openshift-operators
NAME                             READY   STATUS             RESTARTS   AGE
etcd-operator-55b59d6d78-z7bhk   2/3     CrashLoopBackOff   8          17m
etcd-operator-5c7d47f687-ks47v   3/3     Running            0          9s


Expected results:
The crashed pods are expected, since they cannot connect to the fake proxy.
However, the running pods that still use the default proxy should be removed.

Additional info:
mac:~ jianzhang$ oc get deployment -n openshift-operators etcd-operator -o yaml|grep -i "proxy" -A 2
        - name: HTTP_PROXY
          value: test_http
        - name: HTTPS_PROXY
          value: test_https
        - name: NO_PROXY
          value: test
        image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
--
        - name: HTTP_PROXY
          value: test_http
        - name: HTTPS_PROXY
          value: test_https
        - name: NO_PROXY
          value: test
        image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
--
        - name: HTTP_PROXY
          value: test_http
        - name: HTTPS_PROXY
          value: test_https
        - name: NO_PROXY
          value: test
        image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b

mac:~ jianzhang$ oc get pods -n openshift-operators
NAME                             READY   STATUS             RESTARTS   AGE
etcd-operator-55b59d6d78-z7bhk   2/3     CrashLoopBackOff   7          12m
etcd-operator-5c7d47f687-kd58k   3/3     Running            0          17m
mac:~ jianzhang$ oc get pods -n openshift-operators etcd-operator-55b59d6d78-z7bhk -o yaml|grep -i "proxy" -A 2
    - name: HTTP_PROXY
      value: test_http
    - name: HTTPS_PROXY
      value: test_https
    - name: NO_PROXY
      value: test
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
--
    - name: HTTP_PROXY
      value: test_http
    - name: HTTPS_PROXY
      value: test_https
    - name: NO_PROXY
      value: test
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
--
    - name: HTTP_PROXY
      value: test_http
    - name: HTTPS_PROXY
      value: test_https
    - name: NO_PROXY
      value: test
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b

mac:~ jianzhang$ oc get pods -n openshift-operators etcd-operator-5c7d47f687-ks47v -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    alm-examples: |
      [
        {
          "apiVersion": "etcd.database.coreos.com/v1beta2",
          "kind": "EtcdCluster",
          "metadata": {
            "name": "example",
            "annotations": {
              "etcd.database.coreos.com/scope": "clusterwide"
            }
          },
          "spec": {
            "size": 3,
            "version": "3.2.13"
          }
        },
        {
          "apiVersion": "etcd.database.coreos.com/v1beta2",
          "kind": "EtcdRestore",
          "metadata": {
            "name": "example-etcd-cluster-restore"
          },
          "spec": {
            "etcdCluster": {
              "name": "example-etcd-cluster"
            },
            "backupStorageType": "S3",
            "s3": {
              "path": "<full-s3-path>",
              "awsSecret": "<aws-secret>"
            }
          }
        },
        {
          "apiVersion": "etcd.database.coreos.com/v1beta2",
          "kind": "EtcdBackup",
          "metadata": {
            "name": "example-etcd-cluster-backup"
          },
          "spec": {
            "etcdEndpoints": ["<etcd-cluster-endpoints>"],
            "storageType":"S3",
            "s3": {
              "path": "<full-s3-path>",
              "awsSecret": "<aws-secret>"
            }
          }
        }
      ]
    capabilities: Full Lifecycle
    categories: Database
    containerImage: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    createdAt: "2019-02-28 01:03:00"
    description: Create and maintain highly-available etcd clusters on Kubernetes
    olm.operatorGroup: global-operators
    olm.operatorNamespace: openshift-operators
    olm.targetNamespaces: ""
    repository: https://github.com/coreos/etcd-operator
    tectonic-visibility: ocs
  creationTimestamp: "2019-09-13T01:50:37Z"
  generateName: etcd-operator-5c7d47f687-
  labels:
    name: etcd-operator-alm-owned
    pod-template-hash: 5c7d47f687
  name: etcd-operator-5c7d47f687-ks47v
  namespace: openshift-operators
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: etcd-operator-5c7d47f687
    uid: cceb29b1-d5c5-11e9-a905-0292cb65b674
  resourceVersion: "59614"
  selfLink: /api/v1/namespaces/openshift-operators/pods/etcd-operator-5c7d47f687-ks47v
  uid: e1d9dcc0-d5c8-11e9-a905-0292cb65b674
spec:
  containers:
  - command:
    - etcd-operator
    - --create-crd=false
    - -cluster-wide
    env:
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: HTTP_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-184-21.us-east-2.compute.amazonaws.com:3129
    - name: HTTPS_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-184-21.us-east-2.compute.amazonaws.com:3129
    - name: NO_PROXY
      value: .cluster.local,.svc,.us-east-2.compute.internal,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.qe-jiazha-proxy.qe.devcluster.openshift.com,api.qe-jiazha-proxy.qe.devcluster.openshift.com,etcd-0.qe-jiazha-proxy.qe.devcluster.openshift.com,etcd-1.qe-jiazha-proxy.qe.devcluster.openshift.com,etcd-2.qe-jiazha-proxy.qe.devcluster.openshift.com,localhost,test.no-proxy.com
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    imagePullPolicy: IfNotPresent
    name: etcd-operator
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: etcd-operator-token-lpkrg
      readOnly: true
  - command:
    - etcd-backup-operator
    - --create-crd=false
    env:
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: HTTP_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-184-21.us-east-2.compute.amazonaws.com:3129
    - name: HTTPS_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-184-21.us-east-2.compute.amazonaws.com:3129
    - name: NO_PROXY
      value: .cluster.local,.svc,.us-east-2.compute.internal,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.qe-jiazha-proxy.qe.devcluster.openshift.com,api.qe-jiazha-proxy.qe.devcluster.openshift.com,etcd-0.qe-jiazha-proxy.qe.devcluster.openshift.com,etcd-1.qe-jiazha-proxy.qe.devcluster.openshift.com,etcd-2.qe-jiazha-proxy.qe.devcluster.openshift.com,localhost,test.no-proxy.com
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    imagePullPolicy: IfNotPresent
    name: etcd-backup-operator
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: etcd-operator-token-lpkrg
      readOnly: true
  - command:
    - etcd-restore-operator
    - --create-crd=false
    env:
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: HTTP_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-184-21.us-east-2.compute.amazonaws.com:3129
    - name: HTTPS_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-184-21.us-east-2.compute.amazonaws.com:3129
    - name: NO_PROXY
      value: .cluster.local,.svc,.us-east-2.compute.internal,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.qe-jiazha-proxy.qe.devcluster.openshift.com,api.qe-jiazha-proxy.qe.devcluster.openshift.com,etcd-0.qe-jiazha-proxy.qe.devcluster.openshift.com,etcd-1.qe-jiazha-proxy.qe.devcluster.openshift.com,etcd-2.qe-jiazha-proxy.qe.devcluster.openshift.com,localhost,test.no-proxy.com
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    imagePullPolicy: IfNotPresent
    name: etcd-restore-operator
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: etcd-operator-token-lpkrg
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  imagePullSecrets:
  - name: etcd-operator-dockercfg-w4d8s
  nodeName: ip-10-0-75-73.us-east-2.compute.internal
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: etcd-operator
  serviceAccountName: etcd-operator
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: etcd-operator-token-lpkrg
    secret:
      defaultMode: 420
      secretName: etcd-operator-token-lpkrg
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2019-09-13T01:50:35Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2019-09-13T01:50:38Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2019-09-13T01:50:38Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2019-09-13T01:50:37Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://b1af29f64d97b5070cfb5b4e14e2692c5abea3a4ec6b71a1001a082b13b0d713
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    imageID: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    lastState: {}
    name: etcd-backup-operator
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-09-13T01:50:37Z"
  - containerID: cri-o://78d80507f423fa65ebfe062d3bbbfc797eaafd3eb7c483ed6a33d22cd9bfd981
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    imageID: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    lastState: {}
    name: etcd-operator
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-09-13T01:50:37Z"
  - containerID: cri-o://35f1cccf04d1b1f7e84d8b673f2f3830b91391cf0b7ff8edea155199f06d501e
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    imageID: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    lastState: {}
    name: etcd-restore-operator
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: "2019-09-13T01:50:37Z"
  hostIP: 10.0.75.73
  phase: Running
  podIP: 10.131.0.29
  qosClass: BestEffort
  startTime: "2019-09-13T01:50:35Z"

Comment 3 Abu Kashem 2019-09-13 15:02:10 UTC
> 3. Check whether the customized proxy is set on the etcd-operator deployment.
>
> Actual results:
> The customized proxy values were not injected into the operator deployment.

Jian, can you please verify whether the custom proxy values were actually injected? You have crash-looping pods, which is what happens when fake proxy env vars are injected into the Pod.

Comment 4 Abu Kashem 2019-09-13 15:17:41 UTC
Jian,
I assume you are trying to validate whether you can override the default cluster proxy configuration for an existing Pod. There are two ways we can test this:
1. Specify valid proxy env vars in the Subscription. Because you specified fake env vars, the new pods stay in a CrashLoopBackOff state and the old pod never gets removed (as Nick has explained). If you specify a valid proxy configuration in the Subscription, the new pod will report healthy and the old pod should be removed.

2. If you don't have a valid proxy setup, you can specify an empty value in the Subscription. An empty proxy var in the subscription config is special: it is a way for an admin to override the cluster 'proxy' configuration, so the new pod will not have any proxy env var injected.

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-config-test
  namespace: openshift-operators
spec:
  config:
    env:
    - name: HTTP_PROXY
  channel: clusterwide-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: etcdoperator.v0.9.4-clusterwide

You just need to specify one empty env var in the Subscription config.
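For completeness, a minimal sketch of the same config stanza, assuming an admin wants to blank out all three proxy variables rather than only HTTP_PROXY:

  config:
    env:
    - name: HTTP_PROXY
    - name: HTTPS_PROXY
    - name: NO_PROXY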

Comment 5 Evan Cordell 2019-09-13 17:14:59 UTC
Closing this since it looks like there's no issue here. Please re-open if the answers above don't suffice.

Comment 6 Jian Zhang 2019-09-15 13:19:33 UTC
Hi, Nick, Abu

> I think this is just how rolling Deployments work; old pods aren't deleted until new pods are created (if that is the strategy the etcd-operator Deployment uses).

That makes sense, thanks!

> Empty proxy var in subscription config is special, this is a way for an admin to override cluster 'proxy' configuration. So the new pod will not have any proxy env var injected.

OK, thanks for the explanation. I updated Steps 6 and 7 of https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-24566 per this comment; please have a look, thanks!

Comment 7 Jian Zhang 2019-09-15 13:40:20 UTC
Abu,

I created a subscription with an empty value, as shown below, but no new pods were generated for a long time. Is that expected? Reopening this first.

mac:~ jianzhang$ cat sub-etcd-42-proxy.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-config-test
  namespace: openshift-operators
spec:
  config:
    env:
    - name: HTTP_PROXY
  channel: clusterwide-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace
  startingCSV: etcdoperator.v0.9.4-clusterwide

mac:~ jianzhang$ oc get sub  -n openshift-operators
NAME               PACKAGE   SOURCE                CHANNEL
etcd-config-test   etcd      community-operators   clusterwide-alpha
mac:~ jianzhang$ oc get csv  -n openshift-operators
NAME                              DISPLAY   VERSION             REPLACES                          PHASE
etcdoperator.v0.9.4-clusterwide   etcd      0.9.4-clusterwide   etcdoperator.v0.9.2-clusterwide   Succeeded
mac:~ jianzhang$ oc get pods  -n openshift-operators
NAME                            READY   STATUS    RESTARTS   AGE
etcd-operator-9b67f8f96-j4nnn   3/3     Running   0          19m


mac:~ jianzhang$ oc get pods etcd-operator-9b67f8f96-j4nnn  -n openshift-operators -o yaml|grep -i "proxy" -A 2
    - name: HTTP_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-77-137.us-east-2.compute.amazonaws.com:3129
    - name: HTTPS_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-77-137.us-east-2.compute.amazonaws.com:3129
    - name: NO_PROXY
      value: .cluster.local,.svc,.us-east-2.compute.internal,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.qe-jiazha3-proxy.qe.devcluster.openshift.com,api.qe-jiazha3-proxy.qe.devcluster.openshift.com,etcd-0.qe-jiazha3-proxy.qe.devcluster.openshift.com,etcd-1.qe-jiazha3-proxy.qe.devcluster.openshift.com,etcd-2.qe-jiazha3-proxy.qe.devcluster.openshift.com,localhost,test.no-proxy.com
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    imagePullPolicy: IfNotPresent
--
    - name: HTTP_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-77-137.us-east-2.compute.amazonaws.com:3129
    - name: HTTPS_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-77-137.us-east-2.compute.amazonaws.com:3129
    - name: NO_PROXY
      value: .cluster.local,.svc,.us-east-2.compute.internal,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.qe-jiazha3-proxy.qe.devcluster.openshift.com,api.qe-jiazha3-proxy.qe.devcluster.openshift.com,etcd-0.qe-jiazha3-proxy.qe.devcluster.openshift.com,etcd-1.qe-jiazha3-proxy.qe.devcluster.openshift.com,etcd-2.qe-jiazha3-proxy.qe.devcluster.openshift.com,localhost,test.no-proxy.com
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    imagePullPolicy: IfNotPresent
--
    - name: HTTP_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-77-137.us-east-2.compute.amazonaws.com:3129
    - name: HTTPS_PROXY
      value: http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-77-137.us-east-2.compute.amazonaws.com:3129
    - name: NO_PROXY
      value: .cluster.local,.svc,.us-east-2.compute.internal,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.qe-jiazha3-proxy.qe.devcluster.openshift.com,api.qe-jiazha3-proxy.qe.devcluster.openshift.com,etcd-0.qe-jiazha3-proxy.qe.devcluster.openshift.com,etcd-1.qe-jiazha3-proxy.qe.devcluster.openshift.com,etcd-2.qe-jiazha3-proxy.qe.devcluster.openshift.com,localhost,test.no-proxy.com
    image: quay.io/coreos/etcd-operator@sha256:66a37fd61a06a43969854ee6d3e21087a98b93838e284a6086b13917f96b0d9b
    imagePullPolicy: IfNotPresent

Comment 8 Abu Kashem 2019-09-16 17:11:31 UTC
Hi Jian,
I tested on your cluster (with the global proxy). I executed the following steps:
1. Create a namespace `test`
2. Create an OperatorGroup that targets `test` namespace.
3. Create a Subscription with no config
4. Wait for the pod to be in the Running state; the global proxy env vars are injected.
5. Update the Subscription with the config below.

YAML files:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: test
  namespace: test
spec:
  targetNamespaces:
  - test
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-config-test
  namespace: test
spec:
  channel: singlenamespace-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace


Update the Subscription as follows:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd-config-test
  namespace: test
spec:
  config:
    env:
    - name: HTTP_PROXY
  channel: singlenamespace-alpha
  installPlanApproval: Automatic
  name: etcd
  source: community-operators
  sourceNamespace: openshift-marketplace


I was able to reproduce the issue. After I applied the updated Subscription, the changes were not picked up.

However, I was able to work around this issue by manually updating the 'etcd-operator' Deployment spec (I added a new field to the annotations under the Deployment's spec.template). New Pods came up without the global proxy env vars injected, and the new pods were healthy.
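A minimal sketch of that workaround, assuming a hypothetical annotation key (any new annotation on the pod template forces a new ReplicaSet to roll out):

  spec:
    template:
      metadata:
        annotations:
          force-redeploy: "1"   # hypothetical key/value, added only to trigger a rollout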

However, the new pod is injected with the empty proxy env var from the subscription config:
        env:
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: HTTP_PROXY


In summary, we have two issues:
- If a subscription config is updated to override the cluster proxy env variable(s), the change is not picked up by OLM.
- An empty proxy env var specified in the subscription config gets injected into the generated Deployment object.

 
I don't think this is a release blocker, given that we have a (somewhat cumbersome) way to work around the issue. This can be addressed as a z-stream fix in 4.2, I believe.

Comment 10 Dan Geoffroy 2019-09-17 13:15:38 UTC
Moving to 4.3. We understand the issue, but it is not critical enough to be considered a 4.2 release blocker. We will continue looking into this to deliver early in the 4.3 release timeframe and potentially ship it under 4.2.z.

Comment 21 Jian Zhang 2020-03-06 08:15:28 UTC
Hi, Daniel

> Our default behavior is that if the global proxy object is configured and the user sets one of [the proxy env vars], OLM will do nothing when reconciling the deployment.

And, as described in that doc: "If the global proxy object is set and at least one of HTTPS_PROXY, HTTP_PROXY, NO_PROXY are set on the Subscription
Then do nothing different. Global proxy config has been overridden by a user."

Sorry, I'm confused. Do you mean that HTTPS_PROXY, HTTP_PROXY, and NO_PROXY are treated as a whole? When only HTTP_PROXY is set, does that also mean HTTPS_PROXY and NO_PROXY are treated as set to none?
What does "Then do nothing different." mean? Based on step 5 above, the deployment was changed when one of HTTPS_PROXY, HTTP_PROXY, or NO_PROXY was set on the Subscription.


> The first deployment has HTTP_PROXY proxy set to none, it is ignored. When the variable is updated you expect OLM to recreate the deployment with the new env var value - I'm not entirely sure if this is the intended behavior. Could you confirm that enough time passed for an OLM sync cycle to occur (~15 minutes)?

See step "7. Recreate the subscription with the non-empty config." I recreated it rather than updating it, and I'm sure the deployment had been deleted before recreating it.
Yes, after waiting a little more time, it works, as follows:

mac:~ jianzhang$ oc create -f sub-tsb-44.yaml
subscription.operators.coreos.com/openshifttemplateservicebroker created
mac:~ jianzhang$ date
Fri Mar  6 14:02:56 CST 2020
mac:~ jianzhang$ cat sub-tsb-44.yaml 
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshifttemplateservicebroker
  namespace: openshift-template-service-broker
spec:
  config:
    env:
    - name: HTTP_PROXY
      value: test_http
  channel: "4.4"
  installPlanApproval: Automatic
  name: openshifttemplateservicebroker
  source: qe-app-registry
  sourceNamespace: openshift-marketplace

mac:~ jianzhang$ date
Fri Mar  6 14:57:07 CST 2020

mac:~ jianzhang$ oc get deployment  openshift-template-service-broker-operator -o json |jq '.spec.template.spec.containers[0].env'
[
  {
    "name": "IMAGE",
    "value": "image-registry.openshift-image-registry.svc:5000/openshift/ose-template-service-broker:v4.4.0"
  },
  {
    "name": "OPERATOR_NAME",
    "value": "openshift-template-service-broker-operator"
  },
  {
    "name": "POD_NAME",
    "valueFrom": {
      "fieldRef": {
        "apiVersion": "v1",
        "fieldPath": "metadata.name"
      }
    }
  },
  {
    "name": "WATCH_NAMESPACE",
    "valueFrom": {
      "fieldRef": {
        "apiVersion": "v1",
        "fieldPath": "metadata.namespace"
      }
    }
  },
  {
    "name": "HTTP_PROXY",
    "value": "test_http"
  }
]

> so why not delete the old subscription and csv and create the new one with the updated proxy? 

I guess most customers update the proxy settings on the Subscription instead of recreating it; we cannot assume that customers will recreate the Subscription rather than update it. Besides, recreating means the service must be interrupted, so it's not a good solution.

Comment 24 yhui 2020-03-19 08:13:42 UTC
Description of problem:
Based on comment 22, I tested the bug again.
If the behavior happens as designed/documented according to comment 22, I think the bug can be changed to VERIFIED.


Version-Release number of selected component (if applicable):
cluster version is 4.5.0-0.nightly-2020-03-17-225152.
$ oc exec -n openshift-operator-lifecycle-manager catalog-operator-69b7cd5db9-mjnq7 -- olm --version
OLM version: 0.14.2
git commit: 3455a009647abeb4f1791b3539a9a660411b8895


Steps to Reproduce:
1. Create the cluster with proxy enabled.
2. Subscribe an operator without any proxy config.
3. Check the proxy of the pods. The cluster global proxy is injected into the pod:
  
  {
    "name": "HTTP_PROXY",
    "value": "http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-157-37.us-east-2.compute.amazonaws.com:3128"
  },
  {
    "name": "HTTPS_PROXY",
    "value": "http://proxy-user1:JYgU8qRZV4DY4PXJbxJK@ec2-3-17-157-37.us-east-2.compute.amazonaws.com:3128"
  },
  {
    "name": "NO_PROXY",
    "value": ".cluster.local,.svc,.us-east-2.compute.internal,10.0.0.0/16,10.128.0.0/14,127.0.0.1,169.254.169.254,172.30.0.0/16,api-int.yhui-0318proxy.qe.devcluster.openshift.com,etcd-0.yhui-0318proxy.qe.devcluster.openshift.com,etcd-1.yhui-0318proxy.qe.devcluster.openshift.com,etcd-2.yhui-0318proxy.qe.devcluster.openshift.com,localhost,test.no-proxy.com"
  }
   
4. Update the subscription with an empty proxy config:
   
   config:
    env:
    - name: HTTP_PROXY
   
5. Check the proxy of the pods. The empty proxy config is applied to the pod:
  
  {
    "name": "HTTP_PROXY"
  }
  
6. Delete the sub and csv. 

7. Create a subscription with the empty proxy config below:
   
   config:
    env:
    - name: HTTP_PROXY
   
8. Check the proxy of the pods after about 10 minutes; the empty proxy config is injected into the pod:
  
  {
    "name": "HTTP_PROXY"
  }
  
9. Delete the sub and csv. 
10. Create a subscription with the non-empty proxy config below:
   
   config:
    env:
    - name: HTTP_PROXY
      value: test_http
   
11. Check the proxy of the pods after about 10 minutes; the non-empty proxy config is injected into the pod:
  
  {
    "name": "HTTP_PROXY",
    "value": "test_http"
  }
  


Since the results are as designed based on comment 22, I am changing the status to VERIFIED.

Comment 26 errata-xmlrpc 2020-05-04 11:13:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581