Bug 1618311 - [oc cluster up behind proxy] Failed to install "openshift-web-console-operator": timed out waiting for the condition.
Summary: [oc cluster up behind proxy] Failed to install "openshift-web-console-operator": timed out waiting for the condition
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.10.z
Assignee: Joel Smith
QA Contact: DeShuai Ma
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-08-16 12:57 UTC by Praveen Kumar
Modified: 2018-10-08 18:06 UTC
CC: 13 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-10-08 12:58:06 UTC
Target Upstream Version:
Embargoed:



Description Praveen Kumar 2018-08-16 12:57:12 UTC
Description of problem:
oc cluster up for OCP 3.10 does not work as expected behind a proxy.

Version-Release number of selected component (if applicable):
$ ./oc version
oc v3.10.14

How reproducible:

$ ./oc cluster up --http-proxy http://myproxy:3128 --no-proxy localhost,127.0.0.1,172.30.1.1
Getting a Docker client ...
Checking if image openshift/origin-control-plane:v3.10 is available ...
Pulling image openshift/origin-control-plane:v3.10
Image pull complete
Pulling image openshift/origin-cli:v3.10
Pulled 1/4 layers, 28% complete
Pulled 2/4 layers, 51% complete
Pulled 3/4 layers, 80% complete
Pulled 4/4 layers, 100% complete
[...]
I0801 17:31:31.838764    3644 interface.go:41] Finished installing "sample-templates/mariadb" "sample-templates/postgresql" "sample-templates/cakephp quickstart" "sample-templates/dancer quickstart" "sample-templates/sample pipeline" "sample-templates/mongodb" "sample-templates/django quickstart" "sample-templates/nodejs quickstart" "sample-templates/rails quickstart" "sample-templates/jenkins pipeline ephemeral" "sample-templates/mysql"
E0801 17:36:30.854286    3644 interface.go:34] Failed to install "openshift-web-console-operator": timed out waiting for the condition
I0801 17:36:30.854328    3644 interface.go:41] Finished installing "openshift-web-console-operator" "centos-imagestreams" "openshift-image-registry" "openshift-router" "sample-templates" "persistent-volumes


Steps to Reproduce:
1. Download latest released oc client binary (3.10.14)
2. Start the cluster with `oc cluster up` behind the proxy, as shown above.


Actual results:

E0801 17:36:30.854286    3644 interface.go:34] Failed to install "openshift-web-console-operator": timed out waiting for the condition
I0801 17:36:30.854328    3644 interface.go:41] Finished installing "openshift-web-console-operator" "centos-imagestreams" "openshift-image-registry" "openshift-router" "sample-templates" "persistent-volumes

Expected results:

The cluster should come up successfully behind the proxy.

Additional info:

Comment 1 Juan Vallejo 2018-08-17 13:55:09 UTC
This might be caused by a failure to pull the web-console image.

Could you provide logs for the running containers, logs for the `oc cluster up` command (run the command with --loglevel=8), as well as a list of pods?

Comment 2 Praveen Kumar 2018-08-20 05:29:56 UTC
@juan Below are the logs from `oc cluster up` with the log level raised; this is easily reproduced using any proxy.


```
$ oc cluster up --http-proxy http://<my_proxy>:3128 --loglevel=8
[...]
+ oc auth reconcile --config=/kubeconfig.kubeconfig -f -
+ oc process --local -o yaml --ignore-unknown-parameters --param-file=/param-file.txt -f /install.yaml
+ oc apply --namespace=openshift-core-operators --config=/kubeconfig.kubeconfig -f -
I0820 01:17:12.517970    2000 run.go:342] Container run successful
I0820 01:17:12.517976    2000 run.go:293] Deleting container "2364ae2a428c047380dfadfb27df517b6f77a479e827f11af6793856628d4c09"
I0820 01:17:12.581766    2000 web_console_operator.go:68] polling for web-console availability ...
I0820 01:17:12.581994    2000 round_trippers.go:383] GET https://127.0.0.1:8443/apis/apps/v1/namespaces/openshift-web-console/deployments/webconsole
I0820 01:17:12.582006    2000 round_trippers.go:390] Request Headers:
I0820 01:17:12.582012    2000 round_trippers.go:393]     Accept: application/json, */*
I0820 01:17:12.582017    2000 round_trippers.go:393]     User-Agent: oc/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8
I0820 01:17:12.602004    2000 round_trippers.go:408] Response Status: 404 Not Found in 19 milliseconds
I0820 01:17:12.602025    2000 round_trippers.go:411] Response Headers:
I0820 01:17:12.602032    2000 round_trippers.go:414]     Cache-Control: no-store
I0820 01:17:12.602038    2000 round_trippers.go:414]     Content-Type: application/json
I0820 01:17:12.602043    2000 round_trippers.go:414]     Content-Length: 222
I0820 01:17:12.602049    2000 round_trippers.go:414]     Date: Mon, 20 Aug 2018 05:17:12 GMT
I0820 01:17:12.602071    2000 request.go:897] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"deployments.apps \"webconsole\" not found","reason":"NotFound","details":{"name":"webconsole","group":"apps","kind":"deployments"},"code":404}
I0820 01:17:13.603299    2000 web_console_operator.go:68] polling for web-console availability ...
I0820 01:17:13.603413    2000 round_trippers.go:383] GET https://127.0.0.1:8443/apis/apps/v1/namespaces/openshift-web-console/deployments/webconsole
I0820 01:17:13.603422    2000 round_trippers.go:390] Request Headers:
I0820 01:17:13.603427    2000 round_trippers.go:393]     Accept: application/json, */*
I0820 01:17:13.603431    2000 round_trippers.go:393]     User-Agent: oc/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8
I0820 01:17:13.616616    2000 round_trippers.go:408] Response Status: 404 Not Found in 13 milliseconds
I0820 01:17:13.616634    2000 round_trippers.go:411] Response Headers:
I0820 01:17:13.616639    2000 round_trippers.go:414]     Date: Mon, 20 Aug 2018 05:17:13 GMT
I0820 01:17:13.616643    2000 round_trippers.go:414]     Cache-Control: no-store
I0820 01:17:13.616648    2000 round_trippers.go:414]     Content-Type: application/json
I0820 01:17:13.616651    2000 round_trippers.go:414]     Content-Length: 222
I0820 01:17:13.616677    2000 request.go:897] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"deployments.apps \"webconsole\" not found","reason":"NotFound","details":{"name":"webconsole","group":"apps","kind":"deployments"},"code":404}
I0820 01:17:14.608542    2000 web_console_operator.go:68] polling for web-console availability ...
I0820 01:17:14.608637    2000 round_trippers.go:383] GET https://127.0.0.1:8443/apis/apps/v1/namespaces/openshift-web-console/deployments/webconsole
[...]
I0820 01:22:12.602429    2000 web_console_operator.go:68] polling for web-console availability ...
I0820 01:22:12.602674    2000 round_trippers.go:383] GET https://127.0.0.1:8443/apis/apps/v1/namespaces/openshift-web-console/deployments/webconsole
I0820 01:22:12.602705    2000 round_trippers.go:390] Request Headers:
I0820 01:22:12.602729    2000 round_trippers.go:393]     Accept: application/json, */*
I0820 01:22:12.602751    2000 round_trippers.go:393]     User-Agent: oc/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8
I0820 01:22:12.609090    2000 round_trippers.go:408] Response Status: 200 OK in 6 milliseconds
I0820 01:22:12.609173    2000 round_trippers.go:411] Response Headers:
I0820 01:22:12.609200    2000 round_trippers.go:414]     Cache-Control: no-store
I0820 01:22:12.609221    2000 round_trippers.go:414]     Content-Type: application/json
I0820 01:22:12.609242    2000 round_trippers.go:414]     Content-Length: 2924
I0820 01:22:12.609263    2000 round_trippers.go:414]     Date: Mon, 20 Aug 2018 05:22:12 GMT
I0820 01:22:12.609377    2000 request.go:897] Response Body: {"kind":"Deployment","apiVersion":"apps/v1","metadata":{"name":"webconsole","namespace":"openshift-web-console","selfLink":"/apis/apps/v1/namespaces/openshift-web-console/deployments/webconsole","uid":"4f412803-a438-11e8-b8df-52540069540b","resourceVersion":"1379","generation":1,"creationTimestamp":"2018-08-20T05:17:18Z","labels":{"app":"openshift-web-console","webconsole":"true"},"annotations":{"deployment.kubernetes.io/revision":"1"}},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"openshift-web-console","webconsole":"true"}},"template":{"metadata":{"name":"webconsole","creationTimestamp":null,"labels":{"app":"openshift-web-console","webconsole":"true"}},"spec":{"volumes":[{"name":"serving-cert","secret":{"secretName":"webconsole-serving-cert","defaultMode":400}},{"name":"webconsole-config","configMap":{"name":"webconsole-config","defaultMode":440}}],"containers":[{"name":"webconsole","image":"openshift/origin-web-console:v3.10","command":["/usr/bin/origin-web-console","--audit-log-path=-","--config= [truncated 1900 chars]
I0820 01:22:12.610239    2000 web_console_operator.go:68] polling for web-console availability ...
I0820 01:22:12.610410    2000 round_trippers.go:383] GET https://127.0.0.1:8443/apis/apps/v1/namespaces/openshift-web-console/deployments/webconsole
I0820 01:22:12.610441    2000 round_trippers.go:390] Request Headers:
I0820 01:22:12.610489    2000 round_trippers.go:393]     Accept: application/json, */*
I0820 01:22:12.610515    2000 round_trippers.go:393]     User-Agent: oc/v1.10.0+b81c8f8 (linux/amd64) kubernetes/b81c8f8
I0820 01:22:12.612814    2000 round_trippers.go:408] Response Status: 200 OK in 2 milliseconds
I0820 01:22:12.612857    2000 round_trippers.go:411] Response Headers:
I0820 01:22:12.612869    2000 round_trippers.go:414]     Cache-Control: no-store
I0820 01:22:12.612886    2000 round_trippers.go:414]     Content-Type: application/json
I0820 01:22:12.612893    2000 round_trippers.go:414]     Content-Length: 2924
I0820 01:22:12.612900    2000 round_trippers.go:414]     Date: Mon, 20 Aug 2018 05:22:12 GMT
I0820 01:22:12.612943    2000 request.go:897] Response Body: {"kind":"Deployment","apiVersion":"apps/v1","metadata":{"name":"webconsole","namespace":"openshift-web-console","selfLink":"/apis/apps/v1/namespaces/openshift-web-console/deployments/webconsole","uid":"4f412803-a438-11e8-b8df-52540069540b","resourceVersion":"1379","generation":1,"creationTimestamp":"2018-08-20T05:17:18Z","labels":{"app":"openshift-web-console","webconsole":"true"},"annotations":{"deployment.kubernetes.io/revision":"1"}},"spec":{"replicas":1,"selector":{"matchLabels":{"app":"openshift-web-console","webconsole":"true"}},"template":{"metadata":{"name":"webconsole","creationTimestamp":null,"labels":{"app":"openshift-web-console","webconsole":"true"}},"spec":{"volumes":[{"name":"serving-cert","secret":{"secretName":"webconsole-serving-cert","defaultMode":400}},{"name":"webconsole-config","configMap":{"name":"webconsole-config","defaultMode":440}}],"containers":[{"name":"webconsole","image":"openshift/origin-web-console:v3.10","command":["/usr/bin/origin-web-console","--audit-log-path=-","--config= [truncated 1900 chars]
E0820 01:22:12.613149    2000 interface.go:34] Failed to install "openshift-web-console-operator": timed out waiting for the condition
I0820 01:22:12.613169    2000 interface.go:41] Finished installing "openshift-router" "persistent-volumes" "openshift-image-registry" "sample-templates" "openshift-web-console-operator" "centos-imagestreams"
Error: timed out waiting for the condition

```

Comment 3 David Eads 2018-08-29 18:19:49 UTC
We need a few things from comment 1:

 1. Logs for the running containers - missing.
 2. Logs for the `oc cluster up` command (run with --loglevel=8) - provided; it points to a pod problem.
 3. A list of pods - missing; use -o yaml.

Additionally:
 4. A dump of all events; use -o yaml.

Comment 4 Maciej Szulik 2018-08-30 10:30:19 UTC
The problem seems to be with the proxy settings; notice this event for the webconsole pod:

Readiness probe failed: Get https://172.17.0.8:8443/healthz: proxyconnect tcp: dial tcp 192.168.122.60:8080: getsockopt: connection refused

And the aforementioned readiness probe looks like this:

readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /healthz
    port: 8443
    scheme: HTTPS

The liveness probe does not have that problem because it executes the following command:

livenessProbe:
  exec:
    command:
    - /bin/sh
    - -i
    - -c
    - |-
      if [[ ! -f /tmp/webconsole-config.hash ]]; then \
        md5sum /var/webconsole-config/webconsole-config.yaml > /tmp/webconsole-config.hash; \
      elif [[ $(md5sum /var/webconsole-config/webconsole-config.yaml) != $(cat /tmp/webconsole-config.hash) ]]; then \
        exit 1; \
      fi && curl -k -f https://0.0.0.0:8443/console/
  failureThreshold: 3
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1


I've investigated the pod and I haven't seen the proxy being injected into it. I manually invoked curl against that 172.17.0.8 address and it succeeded. So it appears that the kubelet is, for some reason, picking up this proxy and using it for readiness checks.

I was able to bypass the problem by specifying --no-proxy=172.17.0.8, but I'm guessing that in [1] we should return not just the Docker IP but the whole subnet. David, opinions?

[1] https://github.com/openshift/origin/blob/5686cdc2f4224838295f49e7cc7361c5f3b00052/pkg/oc/clusterup/up.go#L950
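
For illustration, here is a minimal Go sketch (not the kubelet's actual code) of the environment-driven proxy selection that Go's HTTP stack performs, which is what an httpGet probe's client ends up using. The proxy URL, the pod IP, and the use of the golang.org/x/net/http/httpproxy helper are assumptions for the example only. With the NO_PROXY value from the original report the probe request to the pod IP resolves to the proxy; adding the exact pod IP makes it go direct, matching the behavior described above.

```go
package main

import (
	"fmt"
	"net/url"

	"golang.org/x/net/http/httpproxy"
)

// proxyFor reports which proxy (if any) a given HTTP_PROXY/HTTPS_PROXY/NO_PROXY
// configuration selects for a request URL, mirroring what an httpGet probe's
// client would do.
func proxyFor(cfg *httpproxy.Config, rawURL string) string {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "parse error: " + err.Error()
	}
	proxyURL, err := cfg.ProxyFunc()(u)
	if err != nil {
		return "proxy error: " + err.Error()
	}
	if proxyURL == nil {
		return "DIRECT"
	}
	return proxyURL.String()
}

func main() {
	// The webconsole readiness probe target from the event above.
	probe := "https://172.17.0.8:8443/healthz"

	// Settings equivalent to the original report: the pod IP is not in NoProxy,
	// so the probe request is routed to the (hypothetical) proxy, which cannot
	// reach the pod network.
	cfg := &httpproxy.Config{
		HTTPProxy:  "http://myproxy:3128",
		HTTPSProxy: "http://myproxy:3128",
		NoProxy:    "localhost,127.0.0.1,172.30.1.1",
	}
	fmt.Println(proxyFor(cfg, probe)) // -> http://myproxy:3128

	// With the exact pod IP excluded (the --no-proxy=172.17.0.8 work-around),
	// the probe goes direct. Whether CIDR entries such as 172.17.0.0/16 are
	// honored depends on the matcher version; see comment 5.
	cfg.NoProxy = "localhost,127.0.0.1,172.30.1.1,172.17.0.8"
	fmt.Println(proxyFor(cfg, probe)) // -> DIRECT
}
```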

Comment 5 Maciej Szulik 2018-08-30 11:15:08 UTC
I've checked several options (172.17.0.0 and 172.17.0.0/16) for --no-proxy and neither worked. You need to explicitly set a particular address for this to work.

I wonder if this is a problem with kubelet, maybe?

Comment 6 David Eads 2018-08-30 12:00:33 UTC
It's just oc cluster up.  We could simply delete the readiness check.

It's worth asking Seth about the kubelet proxy configuration.

Comment 12 Maciej Szulik 2018-09-13 10:04:52 UTC
The proposed fixes from the previous comment didn't fully solve the problem. Since we have multiple places where liveness and readiness probes are defined using httpGet, we are moving this bug to the Pod team: it is clearly a problem with the kubelet, which does not properly ignore proxy settings when communicating with local pods.
We (as in the master team) also agree that we don't want to create temporary hacks that bypass the problem by switching these probes to exec commands, since there are too many and more are on the way. We'd rather see a proper fix in place.

Comment 13 Praveen Kumar 2018-09-17 15:50:42 UTC
Do we have any progress or a PR on this? We can also do early testing of the patch if required. Our CDK release is very near and 3.10 is going to be the default for us; most of our corporate users are behind a proxy, and if that doesn't work properly those users won't be able to play around with 3.10 :(. Also, do we have the same issue with 3.11? (I haven't tested yet but will do tomorrow and report back.)

Comment 16 Joel Smith 2018-09-20 15:47:15 UTC
Please try this work-around and report back:

./oc cluster up --http-proxy http://myproxy:3128 --no-proxy $(docker network inspect -f "{{range .IPAM.Config }}{{ .Subnet }}{{end}}" bridge)

No need to add localhost, 127.0.0.1, et al., as those get added automatically.
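
For reference, a minimal Go sketch of the same subnet lookup the work-around performs, along the lines of the up.go change suggested in comment 4; the docker invocation is taken from the command above and the proxy URL is a placeholder.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask Docker for the bridge network subnet, exactly as the work-around does.
	out, err := exec.Command("docker", "network", "inspect",
		"-f", "{{range .IPAM.Config }}{{ .Subnet }}{{end}}", "bridge").Output()
	if err != nil {
		panic(err)
	}
	subnet := strings.TrimSpace(string(out)) // typically 172.17.0.0/16 on a default install

	// Build the oc cluster up invocation that excludes the whole pod subnet
	// from the proxy instead of a single address.
	fmt.Printf("oc cluster up --http-proxy http://myproxy:3128 --no-proxy %s\n", subnet)
}
```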

Comment 19 Praveen Kumar 2018-09-24 07:20:09 UTC
@Joel Another thing we discovered when deploying an application on the running cluster: the `s2i` builder image doesn't have the proxy information, so this workaround only fixes one issue; something is still missing.

Have a look at the environment variables of the builder container below when running 3.9.0 and 3.10.0.

In the case of 3.9.0, which has the proxy variables:

```
# env
SOURCE_URI=https://github.com/openshift/ruby-ex.git
HOSTNAME=ruby-ex-1-build
RUBY_EX_SERVICE_PORT_8080_TCP=8080
KUBERNETES_PORT=tcp://172.30.0.1:443
KUBERNETES_PORT_443_TCP_PORT=443
TERM=xterm
OPENSHIFT_CONTAINERIZED=true
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=172.30.0.1
KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53
RUBY_EX_PORT_8080_TCP_ADDR=172.30.202.127
KUBERNETES_PORT_53_TCP_PORT=53
KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
KUBERNETES_SERVICE_PORT_DNS=53
KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig
RUBY_EX_PORT_8080_TCP_PORT=8080
ALLOWED_UIDS=1-
PUSH_DOCKERCFG_PATH=/var/run/secrets/openshift.io/push
RUBY_EX_SERVICE_HOST=172.30.202.127
KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1
KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1
DROP_CAPS=KILL,MKNOD,SETGID,SETUID
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
BUILD={"kind":"Build","apiVersion":"v1","metadata":{"name":"ruby-ex-1","namespace":"myproject","selfLink":"/apis/build.openshift.io/v1/namespaces/myproject/builds/ruby-ex-1","uid":"0b8631a5-bfc9-11e8-aaf7-2a459d1779aa","resourceVersion":"1540","creationTimestamp":"2018-09-24T07:11:23Z","labels":{"app":"ruby-ex","buildconfig":"ruby-ex","openshift.io/build-config.name":"ruby-ex","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"ruby-ex","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"ruby-ex","uid":"0b722313-bfc9-11e8-aaf7-2a459d1779aa","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/openshift/ruby-ex.git","ref":"master","httpProxy":"http://10.65.193.158:3128","noProxy":"localhost,127.0.0.1,172.30.1.1,127.0.0.1,192.168.64.29,localhost,172.30.1.1,172.30.1.2,172.30.0.0/8"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"docker.io/centos/ruby-24-centos7@sha256:d6529b7680fa77f27592de43a86342bf21bdfa066f7eb8d50e5d8ae9b9a7d141"},"env":[{"name":"NO_PROXY","value":"localhost,127.0.0.1,172.30.1.1,127.0.0.1,192.168.64.29,localhost,172.30.1.1,172.30.1.2,172.30.0.0/8"},{"name":"no_proxy","value":"localhost,127.0.0.1,172.30.1.1,127.0.0.1,192.168.64.29,localhost,172.30.1.1,172.30.1.2,172.30.0.0/8"},{"name":"HTTP_PROXY","value":"http://10.65.193.158:3128"},{"name":"http_proxy","value":"http://10.65.193.158:3128"},{"name":"HTTPS_PROXY"},{"name":"https_proxy"}]}},"output":{"to":{"kind":"DockerImage","name":"172.30.1.1:5000/myproject/ruby-ex:latest"},"pushSecret":{"name":"builder-dockercfg-zjhhh"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image change","imageChangeBuild":{"imageID":"docker.io/centos/ruby-24-centos7@sha256:d6529b7680fa77f27592de43a86342bf21bdfa066f7eb8d50e5d8ae9b9a7d141","fromRef":{"kind":"ImageStreamTag","namespace":"openshift","name":"ruby:2.4"}}}]},"status":{"phase":"New","outputDockerImageReference":"172.30.1.1:5000/myproject/ruby-ex:latest","config":{"kind":"BuildConfig","namespace":"myproject","name":"ruby-ex"},"output":{}}}

PWD=/var/lib/origin
RUBY_EX_PORT_8080_TCP=tcp://172.30.202.127:8080
RUBY_EX_PORT=tcp://172.30.202.127:8080
LANG=en_US.UTF-8
KUBERNETES_PORT_53_UDP_PORT=53
SOURCE_REF=master
SOURCE_REPOSITORY=https://github.com/openshift/ruby-ex.git
SHLVL=1
HOME=/root
KUBERNETES_PORT_53_UDP_PROTO=udp
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
ORIGIN_VERSION=v3.9.0+71543b2-33
RUBY_EX_SERVICE_PORT=8080
LESSOPEN=||/usr/bin/lesspipe.sh %s
KUBERNETES_PORT_53_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1
KUBERNETES_SERVICE_PORT_DNS_TCP=53
KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443
RUBY_EX_PORT_8080_TCP_PROTO=tcp
_=/usr/bin/env

```


In the case of 3.10, which doesn't:

```
# env
SOURCE_URI=https://github.com/openshift/django-ex.git
HOSTNAME=django-ex-1-build
DJANGO_EX_PORT_8080_TCP_ADDR=172.30.190.91
RUBY_EX_SERVICE_PORT_8080_TCP=8080
DJANGO_EX_PORT_8080_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT=tcp://172.30.0.1:443
TERM=xterm
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_HOST=172.30.0.1
KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53
RUBY_EX_PORT_8080_TCP_ADDR=172.30.252.3
KUBERNETES_PORT_53_TCP_PORT=53
KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53
LS_COLORS=rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=01;05;37;41:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=01;36:*.au=01;36:*.flac=01;36:*.mid=01;36:*.midi=01;36:*.mka=01;36:*.mp3=01;36:*.mpc=01;36:*.ogg=01;36:*.ra=01;36:*.wav=01;36:*.axa=01;36:*.oga=01;36:*.spx=01;36:*.xspf=01;36:
KUBERNETES_SERVICE_PORT_DNS=53
DJANGO_EX_SERVICE_HOST=172.30.190.91
KUBECONFIG=/var/lib/origin/openshift.local.config/master/admin.kubeconfig
RUBY_EX_PORT_8080_TCP_PORT=8080
ALLOWED_UIDS=1-
PUSH_DOCKERCFG_PATH=/var/run/secrets/openshift.io/push
KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1
RUBY_EX_SERVICE_HOST=172.30.252.3
KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1
DROP_CAPS=KILL,MKNOD,SETGID,SETUID
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
BUILD={"kind":"Build","apiVersion":"v1","metadata":{"name":"django-ex-1","namespace":"myproject","selfLink":"/apis/build.openshift.io/v1/namespaces/myproject/builds/django-ex-1","uid":"04a5e900-bfc8-11e8-9059-06f51ac1ee93","resourceVersion":"2682","creationTimestamp":"2018-09-24T07:04:02Z","labels":{"app":"django-ex","buildconfig":"django-ex","openshift.io/build-config.name":"django-ex","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"django-ex","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"django-ex","uid":"0484e540-bfc8-11e8-9059-06f51ac1ee93","controller":true}]},"spec":{"serviceAccount":"builder","source":{"type":"Git","git":{"uri":"https://github.com/openshift/django-ex.git","ref":"master"}},"strategy":{"type":"Source","sourceStrategy":{"from":{"kind":"DockerImage","name":"172.30.1.1:5000/openshift/python@sha256:51cf14c1d1491c5ab0e902c52740c22d4fff52f95111b97d195d12325a426350"},"pullSecret":{"name":"builder-dockercfg-5h277"}}},"output":{"to":{"kind":"DockerImage","name":"172.30.1.1:5000/myproject/django-ex:latest"},"pushSecret":{"name":"builder-dockercfg-5h277"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Build configuration change"}]},"status":{"phase":"New","outputDockerImageReference":"172.30.1.1:5000/myproject/django-ex:latest","config":{"kind":"BuildConfig","namespace":"myproject","name":"django-ex"},"output":{}}}

PWD=/var/lib/origin
RUBY_EX_PORT_8080_TCP=tcp://172.30.252.3:8080
RUBY_EX_PORT=tcp://172.30.252.3:8080
LANG=en_US.UTF-8
DJANGO_EX_PORT_8080_TCP_PORT=8080
KUBERNETES_PORT_53_UDP_PORT=53
SOURCE_REF=master
DJANGO_EX_SERVICE_PORT_8080_TCP=8080
DJANGO_EX_PORT_8080_TCP=tcp://172.30.190.91:8080
SOURCE_REPOSITORY=https://github.com/openshift/django-ex.git
SHLVL=1
HOME=/root
KUBERNETES_PORT_53_UDP_PROTO=udp
DJANGO_EX_SERVICE_PORT=8080
KUBERNETES_PORT_443_TCP_PROTO=tcp
DJANGO_EX_PORT=tcp://172.30.190.91:8080
KUBERNETES_SERVICE_PORT_HTTPS=443
RUBY_EX_SERVICE_PORT=8080
LESSOPEN=||/usr/bin/lesspipe.sh %s
PULL_DOCKERCFG_PATH=/var/run/secrets/openshift.io/pull
KUBERNETES_SERVICE_PORT_DNS_TCP=53
KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1
KUBERNETES_PORT_53_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443
RUBY_EX_PORT_8080_TCP_PROTO=tcp
```

Comment 20 Seth Jennings 2018-09-29 18:17:51 UTC
This is really a different issue than the original bug report. Please open a new bug against the Build component for this issue.

Sending to QE.

QE note: please verify that the cluster control plane (apiserver, controllers, kubelet, and, in this case, web-console) will come up with the _PROXY env vars set. Do not test s2i builds, as comment 19 indicates they do not currently honor the _PROXY env vars.

Comment 21 weiwei jiang 2018-09-30 04:50:37 UTC
Checked with 
# oc version 
oc v3.10.51
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://127.0.0.1:8443
openshift v3.10.45
kubernetes v1.10.0+b81c8f8

# oc cluster up --http-proxy http://127.0.0.1:8888 --no-proxy $(docker network inspect -f "{{range .IPAM.Config }}{{ .Subnet }}{{end}}" bridge) --loglevel=8
...
Server Information ...
OpenShift server started.

The server is accessible via web console at:
    https://127.0.0.1:8443

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin


And all control-plane components work well:
# oc get pods --all-namespaces
NAMESPACE                      NAME                                              READY     STATUS      RESTARTS   AGE
default                        docker-registry-1-qkmrr                           1/1       Running     0          7m
default                        persistent-volume-setup-vv6rb                     0/1       Completed   0          8m
default                        router-1-db6qq                                    1/1       Running     0          7m
kube-dns                       kube-dns-lncr4                                    1/1       Running     0          9m
kube-proxy                     kube-proxy-864bh                                  1/1       Running     0          9m
kube-system                    kube-controller-manager-localhost                 1/1       Running     0          8m
kube-system                    kube-scheduler-localhost                          1/1       Running     0          9m
kube-system                    master-api-localhost                              1/1       Running     0          9m
kube-system                    master-etcd-localhost                             1/1       Running     0          8m
myproject                      h-1-w6drs                                         1/1       Running     0          2m
openshift-apiserver            openshift-apiserver-bww95                         1/1       Running     0          9m
openshift-controller-manager   openshift-controller-manager-cxk8q                1/1       Running     0          8m
openshift-core-operators       openshift-web-console-operator-5d56cc4cb4-z27z9   1/1       Running     0          8m
openshift-web-console          webconsole-6d5d5d6867-vn2bq                       1/1       Running     0          7m

Comment 22 Praveen Kumar 2018-10-03 06:53:13 UTC
(In reply to Seth Jennings from comment #20)
> This is really a different issue that the original bug report.  Please open
> a new bug against Build component for this issue.

https://bugzilla.redhat.com/show_bug.cgi?id=1635513

> 
> Sending to QE.
> 
> QE note: please verify that the cluster control plane (apiserver,
> controllers, kublet, and, in this case, web-console) will come up with the
> _PROXY env vars set.  Do not test s2i builds as comment 19 indicates they do
> not currently honor the _PROXY env vars.

