Created attachment 1757234 [details]
deploying image

Description of problem:

From the Dev console, deploying an image from a private or external registry that requires authentication fails: the pod reports an ImagePullBackOff error even though the image pull secret is created during deployment through the Dev console. The error says "unauthorized". My test image aditya97/testhello is kept private on Docker Hub.

~~~
Events:
  Type     Reason          Age                    From               Message
  ----     ------          ----                   ----               -------
  Normal   Scheduled       5m41s                  default-scheduler  Successfully assigned aditya/hello-65c57ff498-225j2 to worker-0.sharedocp4upi46.lab.upshift.rdu2.redhat.com
  Normal   AddedInterface  5m39s                  multus             Add eth0 [10.128.2.51/23]
  Normal   Pulling         4m16s (x4 over 5m39s)  kubelet            Pulling image "docker.io/aditya97/testhello@sha256:1a6fd470b9ce10849be79e99529a88371dff60c60aab424c07xxxxf6979b4812"
  Warning  Failed          4m15s (x4 over 5m39s)  kubelet            Failed to pull image "docker.io/aditya97/testhello@sha256:1a6fd470b9ce10849be79e99529a88371dff60c60aab424c07xxxxf6979b4812": rpc error: code = Unknown desc = Error reading manifest sha256:1a6fd470b9ce10849be79e99529a88371dff60c60aab424c07xxxxf6979b4812 in docker.io/aditya97/testhello: errors: denied: requested access to the resource is denied; unauthorized: authentication required
  Warning  Failed          4m15s (x4 over 5m39s)  kubelet            Error: ErrImagePull
  Normal   BackOff         4m3s (x6 over 5m38s)   kubelet            Back-off pulling image "docker.io/aditya97/testhello@sha256:1a6fd470b9ce10849be79e99529a88371dff60c60aab424c07xxxxf6979b4812"
  Warning  Failed          35s (x21 over 5m38s)   kubelet            Error: ImagePullBackOff
~~~

Screenshots are attached to show the deploy-image procedure.

When I manually link the created image pull secret (e.g. mysecret) to the default ServiceAccount, it works and the image is pulled.
~~~
# oc secrets link default mysecret --for=pull
# oc describe sa default
Name:                default
Namespace:           aditya
Labels:              <none>
Annotations:         <none>
Image pull secrets:  default-dockercfg-rtvch
                     mysecret   <==== earlier it was not present (which is expected)
Mountable secrets:   default-token-gb65n
                     default-dockercfg-rtvch
Tokens:              default-token-gb65n
                     default-token-mchcr
Events:              <none>
~~~

After linking the image pull secret to the default ServiceAccount:

~~~
  Normal  Pulling  19s (x3 over 38s)  kubelet  Pulling image "docker.io/aditya97/testhello@sha256:1a6fd470b9ce10849be79e99529a88371dff60c60aab424c07xxxxf6979b4812"
  Normal  Created  18s (x3 over 37s)  kubelet  Created container hello
  Normal  Started  18s (x3 over 37s)  kubelet  Started container hello
  Normal  Pulled   18s                kubelet  Successfully pulled image "docker.io/aditya97/testhello@sha256:1a6fd470b9ce10849be79e99529a88371dff60c60aab424c07xxxxf6979b4812" in 238.413536ms
~~~

Version-Release number of selected component (if applicable):
OC 4.6

How reproducible:
Always

Steps to Reproduce:
1. Go to the Dev console.
2. Click on Container Image -> provide an image name from an external registry -> click on "Create image pull secret" -> create the secret -> create the application.
3. See that the pod is not able to pull the image using the credentials in the image pull secret.

Actual results:
ImagePullBackOff error: the image pull secret created through the Dev console is not linked to the default ServiceAccount.

Expected results:
No ImagePullBackOff error: an image pull secret created through the Dev console should be linked to the default ServiceAccount automatically in the background when it is created.

Additional info:
Screenshot attached
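Besides linking the secret to the ServiceAccount, the same workaround can be applied on the workload itself. A minimal sketch, assuming the Deployment is named "hello" in namespace "aditya" and the secret is "mysecret" (names taken from this report; adjust for the actual environment):

```yaml
# Sketch: reference the pull secret directly in the pod template via
# imagePullSecrets, so pods pull with "mysecret" without changing the
# default ServiceAccount.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
  namespace: aditya
spec:
  template:
    spec:
      imagePullSecrets:
        - name: mysecret
      containers:
        - name: hello
          image: docker.io/aditya97/testhello@sha256:1a6fd470b9ce10849be79e99529a88371dff60c60aab424c07xxxxf6979b4812
```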
Created attachment 1757236 [details] creating image pull secret
Created attachment 1757237 [details] image being validated after secret creation
Created attachment 1757238 [details] imagepullbackoff error for newly created pod
Created attachment 1757239 [details] secret details from project ->details Default Pull secret : mysecret
Hi, this issue is known and already fixed in 4.7, see https://bugzilla.redhat.com/show_bug.cgi?id=1924955

We also plan to backport this to 4.6 and provide a fix with the next release, see https://bugzilla.redhat.com/show_bug.cgi?id=1926340

As a workaround you can add the Secret to the `ServiceAccount` (as you have mentioned) or to the Deployment/DeploymentConfig manually.

Our change fixes the linked `ImageStream` instead and automatically sets the "referencePolicy" property to force a local image lookup for the included "tag". So you can manually change your `ImageStream` like this:

~~~
  kind: ImageStream
  apiVersion: image.openshift.io/v1
  metadata:
    name: nodeinfo
  spec:
    lookupPolicy:
      local: false
    tags:
      - name: latest
        annotations:
          openshift.io/generated-by: OpenShiftWebConsole
          openshift.io/imported-from: jerolimov/nodeinfo
        from:
          kind: DockerImage
          name: jerolimov/nodeinfo
        generation: 2
        importPolicy: {}
+       referencePolicy:
+         type: Source
~~~

It would be great if you could verify that this helps in your tests, with your registry as well.

In the end I would recommend closing this as a duplicate of 1924955 or 1926340 and sharing this information with the customer.

See also:
* https://docs.openshift.com/container-platform/4.6/rest_api/image_apis/imagestream-image-openshift-io-v1.html
* https://docs.openshift.com/container-platform/4.6/openshift_images/using-imagestreams-with-kube-resources.html
* https://issues.redhat.com/browse/BUILD-100
* https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
* https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
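For convenience, the `referencePolicy` change can also be applied with `oc patch` instead of editing the YAML by hand. A sketch, assuming the ImageStream `nodeinfo` with tag `latest` from the example above (the `oc` call itself is shown commented out because it needs a logged-in cluster):

```shell
# JSON merge patch that sets referencePolicy.type to "Source" on the
# "latest" tag. Note: a JSON merge patch replaces the whole spec.tags
# list, so include every tag (and the fields you want to keep, such as
# "from") in the patch body.
PATCH='{"spec":{"tags":[{"name":"latest","from":{"kind":"DockerImage","name":"jerolimov/nodeinfo"},"referencePolicy":{"type":"Source"}}]}}'

# Apply it against the cluster (requires an active oc login):
#   oc patch imagestream nodeinfo --type=merge -p "$PATCH"
echo "$PATCH"
```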
Can you check the info above with the customer, or do you need more information?
Thank you for providing the information. We will check the progress of bugzilla https://bugzilla.redhat.com/show_bug.cgi?id=1926340 for the 4.6 backport. The workaround is already known. You can go ahead and mark this as a duplicate.
*** This bug has been marked as a duplicate of bug 1926340 ***