| Summary: | [preview] authentication error when pulling from a private Docker Hub repo | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Jiří Fiala <jfiala> |
| Component: | apiserver-auth | Assignee: | Abhishek Gupta <abhgupta> |
| Status: | CLOSED WONTFIX | QA Contact: | Chuan Yu <chuyu> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | unspecified | CC: | aos-bugs, dakini, jgoncalv, jokerman, maszulik, mfojtik, miminar, mmahut, mmccomas, wgordon, xtian |
| Target Milestone: | --- | Target Release: | --- |
| Hardware: | Unspecified | OS: | Unspecified |
| Whiteboard: | | Fixed In Version: | |
| Doc Type: | If docs needed, set a value | Doc Text: | |
| Story Points: | --- | Clone Of: | |
| Environment: | | Last Closed: | 2017-06-15 07:10:01 UTC |
| Type: | Bug | Regression: | --- |
| Mount Type: | --- | Documentation: | --- |
| CRM: | | Verified Versions: | |
| Category: | --- | oVirt Team: | --- |
| RHEL 7.3 requirements from Atomic Host: | | Cloudforms Team: | --- |
| Target Upstream Version: | | Attachments: | |
Description
Jiří Fiala
2016-12-07 12:48:26 UTC
Created attachment 1229053 [details]
events
Can you verify the secret was actually linked to the SA?

oc get sa default

FYI, I tried to reproduce this locally on 1.4 (master) and it worked fine. I mean, I created the pull secret for my Docker Hub account and then the kubelet was able to pull the image (it gave me an auth error before). I wonder whether this might be a timing issue, or something that is broken on 3.3 (I have to try there).

This was induced on Online Dev Preview (OpenShift Master: v3.3.1.3).
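As background, the linkage Michal asks about lives on the service account object itself: a pull secret only takes effect once it appears in the SA's imagePullSecrets list. A minimal sketch of the expected shape, using the secret and namespace names from this report (illustrative config fragment, not the full object):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: default
  namespace: yap
# Pull secrets the kubelet may use when pulling images for pods
# that run under this service account:
imagePullSecrets:
- name: default-dockercfg-ej7ui
- name: pull-secret        # the Docker Hub secret must be listed here
# Secrets mountable into pods (a separate list from imagePullSecrets):
secrets:
- name: default-token-07vuo
- name: default-dockercfg-ej7ui
```

`oc secrets link default pull-secret --for=pull` is the usual way to add an entry to that list.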
I can confirm the secret is linked:
--
oc get sa default
NAME SECRETS AGE
default 3 22d
oc describe sa default
Name: default
Namespace: yap
Labels: <none>
Image pull secrets: default-dockercfg-ej7ui
gitlab-manual
pull-secret
Mountable secrets: default-token-07vuo
default-dockercfg-ej7ui
gitlab-manual2
another-pull-secret
Tokens: default-token-07vuo
default-token-xtrbe
--
From the user, it would appear that the secrets issue only shows up when using the CLI tools:

---
I went ahead and created a template with a DockerHub pull secret and it works fine for GitHub and GitLab.

objects:
- apiVersion: v1
  kind: ImageStream
  metadata:
    name: openerp
  spec:
    dockerImageRepository: registry.gitlab.com/eupraxialabs/openerp
    pullsecret:
      name: DockerHub

or docker.io/eupraxialabs/openerp
---

Jiri: Can you check whether the ImageStream you created has the right pull secret attached?

I suspect a problem with the dockercfg registry name. Could you please provide the keys (registry names) in the dockercfg in question?
You can get it for example with:
oc describe secret <SECRET_NAME> | sed -n 's/^\.dockercfg:\s*//p' | jq 'keys'
There was a bug in docker which caused the dockercfg to be created with improper keys (something that docker upstream wouldn't create), and OpenShift was unable to match them against docker.io.
My keys in the dockercfg look like this:
$ oc describe secret miminar-docker-hub-cfg | sed -n 's/^\.dockercfg:\s*//p' | jq 'keys'
[
"https://index.docker.io/v1/"
]
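To illustrate why the key spelling matters: before credentials can be used, the dockercfg key has to be recognized as referring to Docker Hub, and the same registry can be written several ways. A minimal sketch of that normalization in Python (hypothetical function names and alias set; this is not the actual OpenShift matching code):

```python
from urllib.parse import urlparse

# Hostnames that all mean Docker Hub (illustrative set, not exhaustive).
DOCKER_HUB_ALIASES = {"docker.io", "index.docker.io", "registry-1.docker.io"}

def normalize_registry(key: str) -> str:
    """Reduce a dockercfg key such as 'https://index.docker.io/v1/' to a bare hostname."""
    if "://" in key:
        key = urlparse(key).netloc
    else:
        key = key.split("/")[0]  # drop any path component
    return key.lower()

def is_docker_hub(key: str) -> bool:
    return normalize_registry(key) in DOCKER_HUB_ALIASES

# The canonical key that docker writes for Docker Hub:
print(is_docker_hub("https://index.docker.io/v1/"))  # True
# A key for a different registry does not match:
print(is_docker_hub("registry.gitlab.com/eupraxialabs/openerp"))  # False
```

A dockercfg whose key doesn't normalize to one of the expected spellings would fail this match, which is the failure mode the docker bug produced.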
I've requested "OpenShift Online (Next Gen) Developer Preview" and "OpenShift Online 3 Developer Preview" accounts to verify there.
@jfiala: I've followed the steps (on recent master, but it doesn't differ that much from what was released in 3.4) and it just works, with a few different ways of passing the image name. As soon as I unlink the secret and re-try the new-app, it won't run. Linking it back and re-trying the deployment worked as well.

I've additionally verified this on:

$ oc version
oc v1.3.1+ac1d579
kubernetes v1.3.0+52492b4
features: Basic-Auth
Server https://localhost:8443
openshift v1.3.1+ac1d579
kubernetes v1.3.0+52492b4

and it's working as expected, following the steps described in the first comment.

@miminar: The secret was created from .docker/config.json (that config works for the docker daemon when pulling the image in question); 'describe' produces this:

--
oc describe secret pull-secret
Name: pull-secret
Namespace: yap
Labels: <none>
Annotations: <none>
Type: kubernetes.io/dockerconfigjson
Data
====
.dockerconfigjson: 210 bytes
--

I have re-created the secret by passing the credentials using:

oc secret new-dockercfg pull-secret- --docker-username=.. etc.

Then the registry server is as expected:

--
oc describe secret pull-secret | sed -n 's/^\.dockercfg:\s*//p' | jq 'keys'
[
"https://index.docker.io/v1/"
]
--

I have linked the new secret, as well as the kubernetes.io/dockerconfigjson one, to the default SA and re-tested, with the same result.

@mfojtik: Not sure how I can tell that.
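The two secret types mentioned above store their payload differently: a kubernetes.io/dockercfg secret holds a bare registry-to-credentials map under the `.dockercfg` key, while kubernetes.io/dockerconfigjson wraps the same map in an "auths" object under `.dockerconfigjson`. A sketch of extracting the registry keys from either flavor (illustrative Python, with a hypothetical helper name; credentials are fake):

```python
import base64
import json

def registry_keys(secret: dict) -> list:
    """Return the registry names stored in either docker pull-secret flavor."""
    data = secret["data"]
    if secret["type"] == "kubernetes.io/dockercfg":
        cfg = json.loads(base64.b64decode(data[".dockercfg"]))
        return sorted(cfg)                   # top-level keys are the registries
    if secret["type"] == "kubernetes.io/dockerconfigjson":
        cfg = json.loads(base64.b64decode(data[".dockerconfigjson"]))
        return sorted(cfg.get("auths", {}))  # registries live under "auths"
    raise ValueError("not a docker pull secret")

# Build a sample of each flavor with the canonical Docker Hub key.
hub = "https://index.docker.io/v1/"
creds = {hub: {"auth": base64.b64encode(b"user:pass").decode()}}

old_style = {
    "type": "kubernetes.io/dockercfg",
    "data": {".dockercfg": base64.b64encode(json.dumps(creds).encode()).decode()},
}
new_style = {
    "type": "kubernetes.io/dockerconfigjson",
    "data": {".dockerconfigjson": base64.b64encode(
        json.dumps({"auths": creds}).encode()).decode()},
}

print(registry_keys(old_style))  # ['https://index.docker.io/v1/']
print(registry_keys(new_style))  # ['https://index.docker.io/v1/']
```

This is why the sed/jq pipeline above only prints keys for the dockercfg flavor: a dockerconfigjson secret has no `.dockercfg` data key to match.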
Here's the image stream:

--
apiVersion: v1
kind: ImageStream
metadata:
  name: testpriv
  namespace: yap
  selfLink: /oapi/v1/namespaces/yap/imagestreams/testpriv
  uid: ad3d8ba5-bd27-11e6-9b27-0e63b9c1c48f
  resourceVersion: '497311049'
  generation: 2
  creationTimestamp: '2016-12-08T09:21:18Z'
  labels:
    app: testpriv
  annotations:
    openshift.io/generated-by: OpenShiftNewApp
    openshift.io/image.dockerRepositoryCheck: '2016-12-08T09:21:19Z'
spec:
  tags:
  - name: latest
    annotations:
      openshift.io/imported-from: fiala82/testpriv
    from:
      kind: DockerImage
      name: fiala82/testpriv
    generation: 2
    importPolicy:
status:
  dockerImageRepository: '172.30.47.227:5000/yap/testpriv'
  tags:
  - tag: latest
    items:
    - created: '2016-12-08T09:21:19Z'
      dockerImageReference: 'fiala82/testpriv@sha256:e32c748ff75dde4f5e6bdf9a3ff53431f43f5629cc3e7f2b09329ac5f30205c3'
      image: 'sha256:e32c748ff75dde4f5e6bdf9a3ff53431f43f5629cc3e7f2b09329ac5f30205c3'
      generation: 2
--

I'm still unable to reproduce locally. But I've verified the error with both
Server https://api.dev-preview-stg.openshift.com:443
openshift v3.3.1.4
kubernetes v1.3.0+52492b4
and
Server https://api.preview.openshift.com:443
openshift v3.3.1.3
kubernetes v1.3.0+52492b4
With the latter, I wasn't able to get past the image import when using `oc
secrets new pull-secret`. The error was:
$ oc describe pods
...
29s 11s 2 {kubelet ip-172-31-8-200.ec2.internal} spec.containers{registry} Normal Pulling pulling image "miminar/registry:1.2.3.4"
27s 10s 2 {kubelet ip-172-31-8-200.ec2.internal} spec.containers{registry} Warning Failed Failed to pull image "miminar/registry:1.2.3.4": Error: image miminar/registry not found
27s 10s 2 {kubelet ip-172-31-8-200.ec2.internal} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "registry" with ErrImagePull: "Error: image miminar/registry not found"
When I used `oc secrets new-dockercfg`, I got the same error as listed above,
and the same as on https://api.dev-preview-stg.openshift.com:443.
Can we get registry log from `https://api.preview.openshift.com:443` cluster
mentioning at least one of the following images:
- miminar/registry
- fiala82/testpriv
- eupraxialabs/389ds
?
Also, what's the version of the docker daemon there?
Marking this as upcoming release after talking with Maciej. We can't reproduce this locally and will probably need some more data from Online to see what is going on. Also, there seems to be at least one workaround for the moment.

Will, can you please describe how exactly the issue was fixed for the user? He added the secrets into a template (but he also had to link them to the default SA, I assume).

I've additionally verified the steps against OSE v3.4.0.33+59c4d51-1; it works as expected. With all that, it looks like the problem is with the Online environment.

I'm moving this to the Online team, since it's something the Online team should address.

OpenShift Online Preview has been decommissioned; go to https://manage.openshift.com/ to use an OpenShift Online Starter cluster.