Description of problem:
Configuring OpenShift with a proxy, the image pullthrough function does not work: the docker-registry cannot fetch the requested blob from a remote registry.

Version-Release number of selected component (if applicable):
openshift v3.4.0.32+d349492
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0
ose-docker-registry:v3.4.0.32
Version: 1.12.3

How reproducible:
always

Steps to Reproduce:
1. Set up OpenShift with a global proxy, behind a proxy (https://docs.openshift.com/container-platform/3.3/install_config/install/advanced_install.html#advanced-install-configuring-global-proxy). Note: to reproduce the bug, make sure your instances cannot access the internet directly.
2. Log in to OpenShift and create a project.
3. Tag an image to create an image stream: `oc tag busybox mybusybox:v1`
4. Check that the docker-registry has pullthrough=true.
5. Grant the registry-admin role to system:anonymous.
6. Try to pull the remote image through the docker-registry: `docker pull 172.30.152.58:5000/zhouy/mybox:v1`
7. Edit the dc of the docker-registry to add "HTTP_PROXY" and "HTTPS_PROXY" env vars:
   - name: HTTP_PROXY
     value: http://xxxx.redhat.com:3128
   - name: HTTPS_PROXY
     value: http://xxxx.redhat.com:3128
8. Repeat step 6.
9. Revert step 7, then make sure the instance running the docker-registry pod can access the internet directly.
10. Repeat step 6.

Actual results:
6.
Pulling the image failed, with this error from the docker-registry:

time="2016-12-06T05:52:51.582206494Z" level=debug msg="could not stat layer link \"sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190\" in repository \"zhouy/mybox\": unknown blob" go.version=go1.7.3 http.request.host="172.30.152.58:5000" http.request.id=b9938fcc-68e1-4a75-8a7e-0d0db332d906 http.request.method=GET http.request.remoteaddr="10.129.0.1:37716" http.request.uri="/v2/zhouy/mybox/blobs/sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190" http.request.useragent="docker/1.12.3 go/go1.6.2 git-commit/8b91553-redhat kernel/3.10.0-514.2.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.3 \\(linux\\))" instance.id=92cc8a48-b749-49fd-bd33-15bbc1408d32 vars.digest="sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190" vars.name="zhouy/mybox"

time="2016-12-06T05:52:51.604679828Z" level=debug msg="swift.Stat(\"/docker/registry/v2/blobs/sha256/56/56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190/data\")" go.version=go1.7.3 http.request.host="172.30.152.58:5000" http.request.id=b9938fcc-68e1-4a75-8a7e-0d0db332d906 http.request.method=GET http.request.remoteaddr="10.129.0.1:37716" http.request.uri="/v2/zhouy/mybox/blobs/sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190" http.request.useragent="docker/1.12.3 go/go1.6.2 git-commit/8b91553-redhat kernel/3.10.0-514.2.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.3 \\(linux\\))" instance.id=92cc8a48-b749-49fd-bd33-15bbc1408d32 trace.duration=22.301016ms trace.file="/builddir/build/BUILD/atomic-openshift-git-0.d349492/_output/local/go/src/github.com/openshift/origin/vendor/github.com/docker/distribution/registry/storage/driver/base/base.go" trace.func="github.com/openshift/origin/vendor/github.com/docker/distribution/registry/storage/driver/base.(*Base).Stat" trace.id=c369c8b9-5056-4335-8374-43a2123a7e84 trace.line=137 vars.digest="sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190" vars.name="zhouy/mybox"

time="2016-12-06T05:52:51.623011847Z" level=info msg="Trying to stat \"sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190\" from \"docker.io/library/busybox:latest\"" go.version=go1.7.3 http.request.host="172.30.152.58:5000" http.request.id=b9938fcc-68e1-4a75-8a7e-0d0db332d906 http.request.method=GET http.request.remoteaddr="10.129.0.1:37716" http.request.uri="/v2/zhouy/mybox/blobs/sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190" http.request.useragent="docker/1.12.3 go/go1.6.2 git-commit/8b91553-redhat kernel/3.10.0-514.2.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.3 \\(linux\\))" instance.id=92cc8a48-b749-49fd-bd33-15bbc1408d32 vars.digest="sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190" vars.name="zhouy/mybox"

10.129.0.1 - - [06/Dec/2016:05:52:57 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
10.129.0.1 - - [06/Dec/2016:05:52:57 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"

time="2016-12-06T05:53:06.623459296Z" level=error msg="Error getting remote repository for image \"docker.io/library/busybox:latest\": Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" go.version=go1.7.3 http.request.host="172.30.152.58:5000" http.request.id=b9938fcc-68e1-4a75-8a7e-0d0db332d906 http.request.method=GET http.request.remoteaddr="10.129.0.1:37716" http.request.uri="/v2/zhouy/mybox/blobs/sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190" http.request.useragent="docker/1.12.3 go/go1.6.2 git-commit/8b91553-redhat kernel/3.10.0-514.2.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.3 \\(linux\\))" instance.id=92cc8a48-b749-49fd-bd33-15bbc1408d32 vars.digest="sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190" vars.name="zhouy/mybox"

time="2016-12-06T05:53:06.623753672Z" level=error msg="response completed with error" err.code="blob unknown" err.detail=sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190 err.message="blob unknown to registry" go.version=go1.7.3 http.request.host="172.30.152.58:5000" http.request.id=b9938fcc-68e1-4a75-8a7e-0d0db332d906 http.request.method=GET http.request.remoteaddr="10.129.0.1:37716" http.request.uri="/v2/zhouy/mybox/blobs/sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190" http.request.useragent="docker/1.12.3 go/go1.6.2 git-commit/8b91553-redhat kernel/3.10.0-514.2.2.el7.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/1.12.3 \\(linux\\))" http.response.contenttype="application/json; charset=utf-8" http.response.duration=15.060936591s http.response.status=404 http.response.written=157 instance.id=92cc8a48-b749-49fd-bd33-15bbc1408d32 vars.digest="sha256:56bec22e355981d8ba0878c6c2f23b21f422f30ab0aba188b54f1ffeff59c190" vars.name="zhouy/mybox"

8.
# docker pull 172.30.152.58:5000/zhouy/mybox:v1
Trying to pull repository 172.30.152.58:5000/zhouy/mybox ...
error parsing HTTP 400 response body: unexpected end of JSON input: ""

10. "docker pull" succeeds:
# docker pull 172.30.152.58:5000/zhouy/mybox:v1
Trying to pull repository 172.30.152.58:5000/zhouy/mybox ...
sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912: Pulling from 172.30.152.58:5000/zhouy/mybox
56bec22e3559: Pull complete
Digest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912
Status: Downloaded newer image for 172.30.152.58:5000/zhouy/mybox:v1

Expected results:
6. The image pullthrough function should work with a proxy and pull the remote image successfully.

Additional info:
Adding upcoming release as this is not a regression. We have to look at this for 3.5.
We have to check whether the swift driver takes the proxy into account.
The swift driver supports proxy environment variables: https://github.com/openshift/origin/blob/master/vendor/github.com/ncw/swift/swift.go#L79-L82
After step 1, is there an HTTP_PROXY set in the dc? Show the output of this command:

$ oc env -n default --list dc/docker-registry | grep _PROXY

If you don't have HTTP_PROXY there after step 1, then step 6 will not work.
After step 7, there is an HTTP_PROXY in the dc, but step 6 still does not work.
The default install does not set HTTP_PROXY in the dc.
(In reply to zhou ying from comment #5)
> After step 7, there is a HTTP_PROXY in dc , the step 6 not work.
> The default install not set the HTTP_PROXY in dc.

Can you make sure the DC was redeployed with the right environment variables? Can you verify that the environment variables are set on the registry pod after you changed the DC?
You have to set the NO_PROXY environment variable to the API server's internal IP address. To get this IP you can do:

1) Get the name of the registry pod ;-)

$ oc exec [REGISTRY_POD] -n default --as=system:admin -- /bin/env | grep KUBERNETES_SERVICE_HOST
KUBERNETES_SERVICE_HOST=172.30.0.1

2) Use:

$ oc set env dc/docker-registry -n default HTTP_PROXY="http://xx:3128" \
    HTTPS_PROXY="http://xxxxx:3128" NO_PROXY=172.30.0.1 --as=system:admin

(NO_PROXY is set to the KUBERNETES_SERVICE_HOST.)

Now repeat step 8 and the image should be pulled:

$ docker pull 172.30.66.57:5000/test/mybusybox:v1
Trying to pull repository 172.30.66.57:5000/test/mybusybox ...
sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e: Pulling from 172.30.66.57:5000/test/mybusybox
4b0bc1c4050b: Pull complete
Digest: sha256:817a12c32a39bbe394944ba49de563e085f1d3c5266eb8e9723256bc4448680e

If this works, I guess we can fix it by updating the documentation to mention this. Will that work?
After talking to Clayton: we should be setting NO_PROXY to KUBERNETES_SERVICE_HOST by default for the entire cluster (in ansible), and we should never proxy requests to the service network. Moving to the installer component.
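As a minimal sketch of what such an installer-side default could look like: the script below builds a NO_PROXY list that always bypasses the proxy for in-cluster traffic. All concrete values (service host, service CIDR, DNS suffixes) are illustrative assumptions; in a real cluster they would come from the master config and the installer inventory, not hard-coded strings.

```shell
#!/bin/sh
# Illustrative values only (assumed, not taken from a real cluster):
KUBERNETES_SERVICE_HOST=172.30.0.1      # API service IP
SERVICES_SUBNET=172.30.0.0/16           # service network CIDR
CLUSTER_SUFFIXES=".cluster.local,.svc"  # in-cluster DNS suffixes

# Build a NO_PROXY list so requests to the service network never hit the proxy.
NO_PROXY="${CLUSTER_SUFFIXES},${KUBERNETES_SERVICE_HOST},${SERVICES_SUBNET}"
echo "NO_PROXY=${NO_PROXY}"
# prints NO_PROXY=.cluster.local,.svc,172.30.0.1,172.30.0.0/16
```

Note that CIDR entries like 172.30.0.0/16 are only honored by clients that support them; older proxy-handling code matches plain host/domain suffixes, which is why the explicit service IP is kept in the list as well.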
So we need to explicitly set those values when creating the router and registry? Should these not propagate from the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY values set in the node's environment? It's pretty tedious to audit the entire stack for places where we could be setting proxy env vars.
Yes, when NO_PROXY=172.30.0.1 is set, the registry works well. Thanks!

# oc set env dc/docker-registry HTTP_PROXY="http://xxx:3128" HTTPS_PROXY="http://xxx:3128" NO_PROXY=172.30.0.1 --as=system:admin
deploymentconfig "docker-registry" updated

# oc env po docker-registry-22-cdqxs --list
# pods docker-registry-22-cdqxs, container registry
......
HTTPS_PROXY=http://xxx:3128
HTTP_PROXY=http://xxx:3128
NO_PROXY=172.30.0.1
Michal Fojtik: Please respond to https://bugzilla.redhat.com/show_bug.cgi?id=1401831#c11.
(In reply to Scott Dodson from comment #11)
> So we need to explicitly set those values when creating the router and
> registry? Should these not propagate from the HTTP_PROXY, HTTPS_PROXY, and
> NO_PROXY values set in the node's environment? It's pretty tedious to audit
> the entire stack for places we could be setting proxy env vars.

I think there are three options:

1) Set NO_PROXY for the registry to point to KUBERNETES_SERVICE_HOST during installation.
2) Set NO_PROXY for the router and registry automatically via the `oc adm` command.
3) Do nothing; just document that when you're running with a proxy turned on, you have to make sure NO_PROXY is set properly.
Is there any harm in setting NO_PROXY="${NO_PROXY},${KUBERNETES_SERVICE_HOST}" in the docker-registry invocation so it's always in the NO_PROXY list when the process starts? (NO_PROXY is comma-separated.) We'd still need to define HTTP_PROXY and HTTPS_PROXY if the environment is actually configured for a proxy. What use cases necessitate the router being proxy aware?
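A minimal sketch of that idea as a startup wrapper. The values are assumptions for illustration (in a real pod KUBERNETES_SERVICE_HOST is injected by the kubelet, and the final exec line is a hypothetical registry invocation); the detail to get right is the comma separator and the initially empty NO_PROXY case.

```shell
#!/bin/sh
# Illustrative startup wrapper: guarantee the API service IP is in NO_PROXY
# before the registry process starts.
KUBERNETES_SERVICE_HOST=172.30.0.1   # injected by the kubelet in a real pod (assumed value)
NO_PROXY="existing.example.com"      # whatever the operator already configured (assumed value)

# Append the service host; ${VAR:+...} avoids a leading comma when NO_PROXY is empty.
NO_PROXY="${NO_PROXY:+${NO_PROXY},}${KUBERNETES_SERVICE_HOST}"
export NO_PROXY
echo "NO_PROXY=${NO_PROXY}"
# prints NO_PROXY=existing.example.com,172.30.0.1
# exec /usr/bin/dockerregistry /config.yml   # hypothetical registry invocation
```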
*** This bug has been marked as a duplicate of bug 1459102 ***
*** Bug 1459102 has been marked as a duplicate of this bug. ***
https://github.com/openshift/openshift-ansible/pull/5148 is the relevant PR for master branch
Confirmed with the latest ansible, the issue has been fixed:

# oc env po docker-registry-1-vhl2c --list
# pods docker-registry-1-vhl2c, container registry
.....
NO_PROXY=.cluster.local,.lab.sjc.redhat.com,.svc,10.1.0.0/16,172.30.0.0/16,192.168.2.172,yyyyy.com
HTTP_PROXY=http://xxxxx:3128
HTTPS_PROXY=http://xxxxx:3128
.....
It has been verified with the openshift-ansible master branch (the last commit id is a1561ed). For the moment, moving to `MODIFIED`, as the fix isn't in the latest openshift-ansible rpm package yet.
[root@openshift-135 ~]# oc env po docker-registry-1-1x3jr --list
# pods docker-registry-1-1x3jr, container registry
.....
NO_PROXY=.cluster.local,.svc,192.168.2.211,openshift...redhat.com
HTTP_PROXY=http://file.rdu.redhat.com:3128
HTTPS_PROXY=http://file.rdu.redhat.com:3128
With a proxy, the build fails when pushing the image, so I will add the 'TestBlocker' keyword.
(In reply to zhou ying from comment #23)
> With proxy, the build will failed for pushing image, so will add
> 'TestBlocker' keywords.

You should have ".svc" in NO_PROXY in /etc/sysconfig/docker; is that not the case?
Scott Dodson: If we should have ".svc" in NO_PROXY in /etc/sysconfig/docker, then the install script should set that by default.
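For illustration, the relevant fragment of /etc/sysconfig/docker would look roughly like this; sysconfig files are plain shell variable assignments, and the proxy hosts and service IP below are placeholder assumptions, not values from this cluster.

```shell
# /etc/sysconfig/docker (fragment) -- illustrative values only.
# NO_PROXY must include the in-cluster DNS suffixes so that pushes to
# in-cluster services such as the docker-registry service bypass the proxy.
HTTP_PROXY=http://proxy.example.com:3128
HTTPS_PROXY=http://proxy.example.com:3128
NO_PROXY=.cluster.local,.svc,172.30.0.1
```

Leading-dot entries like ".svc" are conventionally treated as domain-suffix matches, so any hostname ending in .svc bypasses the proxy.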
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2017:3188