Description of problem:
Since upgrading to OCP 3.3, when trying to push an image from docker to the OpenShift internal registry, I get the error message:

# docker push docker-registry.example.com:80/username/nginx
The push refers to a repository [docker-registry.kermit-beta.example.com:80/username/nginx]
896eeed30318: Preparing
3f9de3fe61a7: Preparing
142a601d9793: Preparing
unauthorized: authentication required

Version-Release number of selected component (if applicable):
OpenShift Container Platform 3.3

How reproducible:
On customer side

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
This is related to bug 1371423.

Upstream issue is described at: https://github.com/docker/docker/issues/18469
Upstream docker/distribution fix is at: https://github.com/docker/distribution/pull/1868

TL;DR: if a client or a proxy (here an HAProxy) appends a :80 or :443 suffix to the name of a registry requiring authentication during a blob upload request, the upload will fail.

We might need to back-port the upstream fix. I'll try to find a workaround in the meantime.
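To illustrate the failure mode: the upstream fix effectively treats a registry host carrying an explicit default port (:80 for HTTP, :443 for HTTPS) as equivalent to the bare hostname when matching authentication state. A minimal Python sketch of that normalization (the function name and structure are illustrative, not the actual docker/distribution code):

```python
# Illustrative sketch only: drop an explicit default port from a registry
# host, so "registry.example.com:80" over HTTP matches the bare
# "registry.example.com" that the auth challenge was issued for.
# This mirrors the idea behind the upstream fix, not its actual code.

DEFAULT_PORTS = {"http": "80", "https": "443"}

def normalize_registry_host(host: str, scheme: str) -> str:
    """Strip the port suffix when it is the default port for the scheme."""
    if ":" in host:
        name, _, port = host.rpartition(":")
        if port == DEFAULT_PORTS.get(scheme):
            return name
    return host

if __name__ == "__main__":
    # ":80" is the HTTP default, so it is stripped...
    print(normalize_registry_host("docker-registry.example.com:80", "http"))
    # ...while a non-default port is kept as-is.
    print(normalize_registry_host("docker-registry.example.com:5000", "http"))
```

Without such normalization, "registry:80" and "registry" are treated as two different registries, and credentials obtained for one do not apply to the other.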
One workaround is to secure the registry: https://docs.openshift.org/latest/install_config/registry/securing_and_exposing_registry.html
There's another workaround. Modify the router in the following way:

oc env dc/router ROUTER_SERVICE_HTTPS_PORT=${SOME_DUMMY_PORT}

Unfortunately, it's currently not possible to tell the HAProxy router not to bind to the https port at all. The above makes it bind to a dummy port (e.g. 44443) that a normal client wouldn't attempt to use. During a login, docker will fail on 443 and fall back to 80, which succeeds.

Please make sure to remove the `:80` suffix from all the commands and configs above.
Extending comment 11: to create a router from scratch with the mentioned change applied, run:

oadm router -o json | oc env -f - --output=json ROUTER_SERVICE_HTTPS_PORT=44443 | oc create -f -

I thought another workaround would be to use a non-default http port (like :5000), so the push would look like:

docker push docker-registry.example.com:5000/username/nginx

But that hits limitations of our router (see issue https://github.com/openshift/origin/issues/11337).
Miheer, is one of the provided workarounds a feasible solution for the customer?
I hope the secure approach works for you. In case of modifying a router, you should use "router shards" as described here:

https://docs.openshift.org/latest/install_config/router/default_haproxy_router.html#using-router-shards

In short, you would modify your current router to handle only the secure routes:

oc env dc/router ROUTE_LABELS='insecure != true' ROUTER_SERVICE_HTTP_PORT=48880
# remove port 80 from router's exposed ports
oc get -o json dc/router | jq '.spec.template.spec.containers[0].ports |= [ .[] | select(.hostPort != 80)]' | oc replace -f -

And you would deploy one additional router for insecure connections (or just for the registry) with:

oadm router -o json insecure-router | oc env -f - --output=json ROUTER_SERVICE_HTTPS_PORT=44443 ROUTE_LABELS='insecure=true' | oc create -f -

You should then label all your routes with the insecure label, e.g.:

oc label routes --all insecure=false # for all namespaces
oc label route/docker-registry insecure=true
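For clarity, the jq expression above rewrites the deployment config so the router container no longer binds host port 80. The same transformation expressed as a small Python sketch, run against a trimmed, hypothetical dc/router JSON (the port list here is illustrative, not a dump from a real cluster):

```python
import json

# Hypothetical, trimmed-down dc/router object for illustration only.
dc = {
    "spec": {"template": {"spec": {"containers": [{
        "name": "router",
        "ports": [
            {"name": "http", "containerPort": 80, "hostPort": 80},
            {"name": "https", "containerPort": 443, "hostPort": 443},
            {"name": "stats", "containerPort": 1936, "hostPort": 1936},
        ],
    }]}}}
}

def drop_host_port(dc: dict, host_port: int) -> dict:
    """Remove entries bound to host_port from the first container's ports,
    equivalent to:
    jq '.spec.template.spec.containers[0].ports |= [ .[] | select(.hostPort != 80)]'
    """
    container = dc["spec"]["template"]["spec"]["containers"][0]
    container["ports"] = [
        p for p in container["ports"] if p.get("hostPort") != host_port
    ]
    return dc

if __name__ == "__main__":
    print(json.dumps(drop_host_port(dc, 80), indent=2))
```

The https and stats ports are untouched; only the port 80 binding is dropped, which is what prevents the sharded router from answering insecure traffic.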
The fix https://github.com/openshift/origin/pull/11391 could be merged into master by the end of this week, so it could be bundled in the next errata.
Marking as upcoming release as this is not a regression and we will eventually have passthrough, but not in the 3.4 release.
https://github.com/openshift/origin/pull/11391 has been merged into origin. Finally.
Ran the test on an origin env; unfortunately, the result is not as expected:

1. env info:
openshift v1.5.0-alpha.0+42ad22e-478
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0
version=v2.5.1+unknown

2.
[root@ip-172-18-12-255 ~]# oc expose svc docker-registry
route "docker-registry" exposed
[root@ip-172-18-12-255 ~]# oc get route
NAME              HOST/PORT                                                  PATH   SERVICES          PORT       TERMINATION
docker-registry   docker-registry-default.router.default.svc.cluster.local          docker-registry   5000-tcp
[root@ip-172-18-12-255 ~]# oc get route -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: Route
  metadata:
    annotations:
      openshift.io/host.generated: "true"
    creationTimestamp: 2016-12-21T09:00:01Z
    labels:
      docker-registry: default
    name: docker-registry
    namespace: default
    resourceVersion: "906"
    selfLink: /oapi/v1/namespaces/default/routes/docker-registry
    uid: db40ecb0-c75b-11e6-b4ba-0e02c4479cc8
  spec:
    host: docker-registry-default.router.default.svc.cluster.local
    port:
      targetPort: 5000-tcp
    to:
      kind: Service
      name: docker-registry
      weight: 100
    wildcardPolicy: None
  status:
    ingress:
    - conditions:
      - lastTransitionTime: 2016-12-21T09:00:01Z
        status: "True"
        type: Admitted
      host: docker-registry-default.router.default.svc.cluster.local
      routerName: router
      wildcardPolicy: None
kind: List
metadata: {}

[root@ip-172-18-12-255 ~]# vim /etc/sysconfig/docker

The /etc/sysconfig/docker file:
OPTIONS='--insecure-registry=172.30.0.0/16 --insecure-registry=docker-registry-default.router.default.svc.cluster.local:80 --insecure-registry=ci.dev.openshift.redhat.com:5000 --selinux-enabled --log-driver=journald'

[root@ip-172-18-12-255 ~]# systemctl restart docker
[root@ip-172-18-12-255 ~]# curl -v http://docker-registry-default.router.default.svc.cluster.local/healthz
* About to connect() to docker-registry-default.router.default.svc.cluster.local port 80 (#0)
*   Trying 54.89.122.100...
* Connection timed out
* Failed connect to docker-registry-default.router.default.svc.cluster.local:80; Connection timed out
* Closing connection 0
curl: (7) Failed connect to docker-registry-default.router.default.svc.cluster.local:80; Connection timed out
Have you deployed a router? Also, does `docker-registry-default.router.default.svc.cluster.local` resolve to the IP of your host?

After adding "192.168.122.81 docker-registry-default.router.default.svc.cluster.local" to my `/etc/hosts` and deploying a router, I'm able to curl the registry just fine using:

curl -I -kv http://docker-registry-default.router.default.svc.cluster.local/healthz

The `docker login -u joe -p $token -e unused docker-registry-default.router.default.svc.cluster.local:80` works as well.
Verified in origin env:
openshift v1.5.0-alpha.1+15872c0-61
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

After adding "172.xx.11.xxx docker-registry-default.router.default.svc.cluster.local" to /etc/hosts, the curl operation passed. And after docker login, docker pull/push passed:

# docker login -u geliu -p DGqNtTnBilFmi-D73S6oK_sxlzXO_eLkQnDrv9XW_4o -e geliu docker-registry-default.router.default.svc.cluster.local:80
# docker pull docker-registry-default.router.default.svc.cluster.local:80/lgproj/jenkins
Using default tag: latest
Trying to pull repository docker-registry-default.router.default.svc.cluster.local:80/lgproj/jenkins ...
latest: Pulling from docker-registry-default.router.default.svc.cluster.local:80/lgproj/jenkins
30cf2e26a24f: Already exists
99dd41655d8a: Already exists
6d9f4445f395: Pull complete
Digest: sha256:79ad4b35f8764f487ae84ee63ea06b301eb10b2d8999f91557419dc5d75b0698
Status: Downloaded newer image for docker-registry-default.router.default.svc.cluster.local:80/lgproj/jenkins:latest
# docker push docker-registry-default.router.default.svc.cluster.local:80/lgproj/ruby:99
The push refers to a repository [docker-registry-default.router.default.svc.cluster.local:80/lgproj/ruby]
a57b60975d9f: Layer already exists
6d52a89cc6dc: Layer already exists
65dfb1d31a1e: Layer already exists
b452c96f0223: Layer already exists
99: digest: sha256:edd63249b81c3530500e51bfe27ced20a5d314e195b3479f1035328b5b47df8e size: 7373
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0884