Description of problem:
Build an OSE 3.3 env with docker-registry 2.4.1, and configure the docker-registry with a storage backend such as Swift or S3. Create an app and build an image. The build fails with the error: Failed to push image: EOF.

Version-Release number of selected component (if applicable):
openshift v1.3.0-alpha.2+f51703b
kubernetes v1.3.0+57fb9ac
etcd 2.3.0+git

openshift v3.3.0.5
kubernetes v1.3.0+57fb9ac
etcd 2.3.0+git

registry: version=v2.4.1

How reproducible:
Always

Steps to Reproduce:
1. Build an OSE 3.3 env with registry 2.4.1;
2. Configure the registry with a Swift or S3 backend;
3. Create a build.

Actual results:
3. Build fails with the message: error: build error: Failed to push image: EOF

Expected results:
3. Build succeeds.

Additional info:
Can reproduce this issue with an EC2 instance and registry 2.4.1 + S3 backend.

Errors from the docker-registry logs:
time="2016-07-15T00:54:21.618901426-04:00" level=panic msg="Configuration error: OpenShift registry middleware not activated"
2016-07-15 00:54:21.619053 I | http: panic serving 10.1.1.1:41550: &{0xc8200144c0 map[] 2016-07-15 00:54:21.618901426 -0400 EDT panic Configuration error: OpenShift registry middleware not activated}
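For context on step 2 above, the storage backend section of the registry's config.yml follows the docker/distribution storage-driver schema. A sketch of an S3 section is below; the bucket, region, and credential values are placeholders, not values taken from this report:

```yaml
version: 0.1
storage:
  cache:
    blobdescriptor: inmemory
  s3:
    accesskey: <AWS_ACCESS_KEY>      # placeholder credential
    secretkey: <AWS_SECRET_KEY>      # placeholder credential
    region: us-east-1                # example region
    bucket: my-registry-bucket       # placeholder bucket name
    encrypt: true
    secure: true
    rootdirectory: /registry
```

On its own this section is not sufficient for an OpenShift registry; as the following comments show, the middleware stanza must also be present.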
You're most probably missing some middleware sections from your registry config.yml file. Please update them according to our documentation:

https://docs.openshift.org/latest/install_config/install/docker_registry.html#docker-registry-deploying-updated-configuration

Please extend your config file with at least:

middleware:
  registry:
    - name: openshift
  storage:
    - name: openshift

Does that help?
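Putting the documented pieces together, a minimal config.yml for an OpenShift-aware registry might look like the sketch below. The log and http sections are illustrative defaults, not taken from this report; the point is that middleware sits at the top level, alongside auth:

```yaml
version: 0.1
log:
  level: info
http:
  addr: :5000
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        pullthrough: true
  storage:
    - name: openshift
```

Note that options belongs to the list item under repository (indented to align with name), while registry, repository, and storage are sibling keys of middleware.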
Yes, after extending the config file the issue is fixed, thanks.

middleware:
  registry:
    - name: openshift
  storage:
    - name: openshift
  repository:
    - name: openshift
      options:
        pullthrough: true

openshift version
openshift v3.3.0.12
kubernetes v1.3.0+57fb9ac
etcd 2.3.0+git
When we use the native s3 registry config generated by openshift-ansible, the docker-registry's middleware stanza should contain the following by default:

  registry:
    - name: openshift
  storage:
    - name: openshift
Will close this and open a new one.
Proposed PR: https://github.com/openshift/openshift-ansible/pull/2280
Michal, is that additional bit of config valid for the 2.0 registry too? As in, should we add this for OSE 3.2 installs as well?
(In reply to Scott Dodson from comment #6)
> Is that additional bit of config valid for 2.0 registry too? As in, should
> we add this for OSE 3.2 installs as well?

No, it's not valid. It will be valid if we backport the 2.4 registry, though [1].

[1] https://github.com/openshift/ose/pull/314
So is there a documentation bug here?

https://docs.openshift.com/enterprise/3.2/install_config/install/docker_registry.html#docker-registry-configuration-reference-middleware
@ghunag there was :-). It's fixed now. See [1].

[1] https://github.com/openshift/openshift-docs/pull/2630
Confirmed on today's build env, the issue has been fixed.

openshift version
openshift v3.3.0.18
kubernetes v1.3.0+507d3a7
etcd 2.3.0+git

middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        acceptschema2: false
        pullthrough: true
        enforcequota: false
        projectcachettl: 1m
        blobrepositorycachettl: 10m
  storage:
    - name: openshift
> middleware:
>   registry:
>     - name: openshift
>   repository:
>     - name: openshift
>       options:
>         acceptschema2: false
>         pullthrough: true
>         enforcequota: false
>         projectcachettl: 1m
>         blobrepositorycachettl: 10m
>   storage:
>     - name: openshift

This is a swift config file not generated by openshift-ansible. Currently openshift-ansible doesn't support swift backend storage for the docker-registry, so we should check it on AWS. https://github.com/openshift/openshift-ansible/pull/2280 isn't in the latest openshift-ansible rpm package.

@Michal, we regard this bug as an installer bug, so I'm moving it to ASSIGNED first, and will check again once it's fixed in the puddle.
Gan, That PR should be in both 3.2.22-1 and the 3.3 version today. Marking ON_QA.
The docker registry failed to deploy. "repository" is not where we expect it (it should be at the same level as "registry" and "storage"). I'm not sure what's wrong.

[root@ip-172-18-6-22 ~]# oc logs docker-registry-6-3j350
time="2016-08-15T06:42:43-04:00" level=fatal msg="Error parsing configuration file: yaml: line 28: did not find expected '-' indicator"

Get registry-config:
<--snip-->
auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
      repository:
        - name: openshift
          options:
            pullthrough: True
  storage:
    - name: openshift
Gan, the repository section needs to be at the same level as registry and storage, like this:

auth:
  openshift:
    realm: openshift
middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        pullthrough: True
  storage:
    - name: openshift
Oh I see, that's an ansible bug. Sorry for my useless comment. Moving back to ASSIGNED.
I'm confused that I got a different config.yml today with ansible-2.2.0-0.5.prerelease.el7.noarch. Not sure whether the issue is related to the ansible version.

<--snip-->
auth:
  openshift:
    realm: openshift
middleware:
  repository:
    - name: openshift
      options:
        pullthrough: True
https://github.com/openshift/openshift-ansible/pull/2314 fixes the padding. Part of the difficulty in testing this is that the secret is not overwritten on subsequent runs. We should decide if we should do that or not in the future.
I tried it out using the master branch. repository is at the right level, but it looks like "openshift.common.version_gte_3_3_or_1_3" was set to false.

Get the config.yml:
<--snip-->
auth:
  openshift:
    realm: openshift
middleware:
  repository:
    - name: openshift
      options:
        pullthrough: True
Removing the dependency on BZ#1368034 since it's an edge case to set "openshift_image_tag=v3.3".
Confirmed on AWS, the issue has been fixed:

middleware:
  registry:
    - name: openshift
  repository:
    - name: openshift
      options:
        pullthrough: True
  storage:
    - name: openshift

openshift version
openshift v3.3.0.24-dirty
kubernetes v1.3.0+507d3a7
etcd 2.3.0+git

Will verify it.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2016:1933