Description of problem:

On OpenShift on Z, when trying to use the Perl sample from the developer samples, the created sample tries to pull the image docker.io/library/perl-sample, which does not exist.

Version-Release number of selected component (if applicable):

On multiple 4.8 builds, including 4.8.0-0.nightly-s390x-2021-04-13-111422. Tried with builder image versions 5.30-el7, 5.30-ubi8, and latest.

How reproducible:

Every time

Steps to Reproduce:
1. Log into a new s390x OCP cluster.
2. In the Developer perspective, go to "+Add", then click Samples.
3. Select the "Perl" sample and choose any of the given builder image versions.
4. Click "Create".
5. Check the pod events for the error.

Actual results:

The pod is stuck in ImagePullBackOff and the events show:

Failed to pull image "perl-sample:latest": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/perl-sample: errors:
denied: requested access to the resource is denied
unauthorized: authentication required

Expected results:

The sample application should start like all the other language samples.
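For anyone reproducing this from the CLI rather than the console, roughly the following commands show the same failure; the project name and the app label are assumptions based on the defaults the console uses for the Perl sample.

```
# Inspect the sample pod and its pull-related warning events
# (project "default" and label app=perl-sample are assumed).
oc project default
oc get pods -l app=perl-sample
oc get events --field-selector type=Warning | grep perl-sample
```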
For OCP, the samples installed by the samples cluster operator do not point to docker.io/library/perl-sample. The perl ImageStream tags from samples all point to registry.redhat.io/ubi8/perl* images, per https://github.com/openshift/cluster-samples-operator/blob/master/assets/operator/ocp-s390x/perl/imagestreams/perl-rhel.json. This must be a dev-console-specific sample.
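As a side note, one way to confirm where the operator-installed perl ImageStream tags actually point is a quick jsonpath query; this is only an illustrative check, not something from the original report.

```
# Print each tag of the perl ImageStream in the openshift namespace
# together with the image it references.
oc get imagestream perl -n openshift \
  -o jsonpath='{range .spec.tags[*]}{.name}{" -> "}{.from.name}{"\n"}{end}'
```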
The dev console does not own or manipulate any of the samples. It uses the ImageStreams that are installed in the openshift namespace.

While the build is running, the pod will show an error (as tested on 4.8.0-0.nightly-2021-04-19-071934):

Failed to pull image "image-registry.openshift-image-registry.svc:5000/cvogt/perl-sample:latest": rpc error: code = Unknown desc = Error reading manifest latest in image-registry.openshift-image-registry.svc:5000/cvogt/perl-sample: manifest unknown: manifest unknown

I don't know where `docker.io/library/perl-sample` is coming from. Your error also describes an authorization error. Could you please provide the contents of the `perl` ImageStream in the `openshift` namespace:

oc get imagestreams -n openshift -o yaml perl

Also, what permissions does your user have?
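For the permissions question, a rough way to capture what the user is allowed to do in the project is sketched below; the namespace is an assumption taken from the later comments.

```
# Show the current user and what it can do in the project used for the sample.
oc whoami
oc auth can-i --list -n default
oc auth can-i create builds -n default
```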
This is using a cluster-admin user. Actually, the perl ImageStream in the openshift namespace has the following, which appears correct:
```
kind: DockerImage
name: registry.redhat.io/ubi8/perl-526:latest
```
But checking through the perl-sample Deployment, it appears that it references the ImageStream perl-sample that gets created in the same namespace (default in this case).
```
$ oc get deployment perl-sample -o yaml -n default
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    alpha.image.policy.openshift.io/resolve-names: '*'
    app.openshift.io/vcs-ref: ""
    app.openshift.io/vcs-uri: https://github.com/sclorg/dancer-ex.git
    deployment.kubernetes.io/revision: "1"
    image.openshift.io/triggers: '[{"from":{"kind":"ImageStreamTag","name":"perl-sample:latest","namespace":"default"},"fieldPath":"spec.template.spec.containers[?(@.name==\"perl-sample\")].image","pause":"false"}]'
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: "2021-04-19T20:56:37Z"
  generation: 1
  labels:
    app: perl-sample
    app.kubernetes.io/component: perl-sample
    app.kubernetes.io/instance: perl-sample
    app.kubernetes.io/name: perl-sample
    app.kubernetes.io/part-of: sample-app
    app.openshift.io/runtime: perl
    app.openshift.io/runtime-version: 5.30-el7
  managedFields:
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:alpha.image.policy.openshift.io/resolve-names: {}
          f:app.openshift.io/vcs-ref: {}
          f:app.openshift.io/vcs-uri: {}
          f:image.openshift.io/triggers: {}
          f:openshift.io/generated-by: {}
        f:labels:
          .: {}
          f:app: {}
          f:app.kubernetes.io/component: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/part-of: {}
          f:app.openshift.io/runtime: {}
          f:app.openshift.io/runtime-version: {}
      f:spec:
        f:progressDeadlineSeconds: {}
        f:replicas: {}
        f:revisionHistoryLimit: {}
        f:selector: {}
        f:strategy:
          f:rollingUpdate:
            .: {}
            f:maxSurge: {}
            f:maxUnavailable: {}
          f:type: {}
        f:template:
          f:metadata:
            f:labels:
              .: {}
              f:app: {}
              f:deploymentconfig: {}
          f:spec:
            f:containers:
              k:{"name":"perl-sample"}:
                .: {}
                f:image: {}
                f:imagePullPolicy: {}
                f:name: {}
                f:ports:
                  .: {}
                  k:{"containerPort":8080,"protocol":"TCP"}:
                    .: {}
                    f:containerPort: {}
                    f:protocol: {}
                f:resources: {}
                f:terminationMessagePath: {}
                f:terminationMessagePolicy: {}
            f:dnsPolicy: {}
            f:restartPolicy: {}
            f:schedulerName: {}
            f:securityContext: {}
            f:terminationGracePeriodSeconds: {}
    manager: Mozilla
    operation: Update
    time: "2021-04-19T20:56:37Z"
  - apiVersion: apps/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:deployment.kubernetes.io/revision: {}
      f:status:
        f:conditions:
          .: {}
          k:{"type":"Available"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
          k:{"type":"Progressing"}:
            .: {}
            f:lastTransitionTime: {}
            f:lastUpdateTime: {}
            f:message: {}
            f:reason: {}
            f:status: {}
            f:type: {}
        f:observedGeneration: {}
        f:replicas: {}
        f:unavailableReplicas: {}
        f:updatedReplicas: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-04-19T20:56:37Z"
  name: perl-sample
  namespace: default
  resourceVersion: "1324050"
  uid: 92fb026f-0b2c-4f51-8662-9b458cbb6124
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: perl-sample
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: perl-sample
        deploymentconfig: perl-sample
    spec:
      containers:
      - image: perl-sample:latest
        imagePullPolicy: Always
        name: perl-sample
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2021-04-19T20:56:37Z"
    lastUpdateTime: "2021-04-19T20:56:37Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2021-04-19T21:06:38Z"
    lastUpdateTime: "2021-04-19T21:06:38Z"
    message: ReplicaSet "perl-sample-7586d5857c" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 1
  replicas: 1
  unavailableReplicas: 1
  updatedReplicas: 1
```
```
$ oc get imagestream perl-sample -o yaml -n default
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  annotations:
    app.openshift.io/vcs-ref: ""
    app.openshift.io/vcs-uri: https://github.com/sclorg/dancer-ex.git
    openshift.io/generated-by: OpenShiftWebConsole
  creationTimestamp: "2021-04-19T20:56:36Z"
  generation: 1
  labels:
    app: perl-sample
    app.kubernetes.io/component: perl-sample
    app.kubernetes.io/instance: perl-sample
    app.kubernetes.io/name: perl-sample
    app.kubernetes.io/part-of: sample-app
    app.openshift.io/runtime: perl
    app.openshift.io/runtime-version: 5.30-el7
  managedFields:
  - apiVersion: image.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:app.openshift.io/vcs-ref: {}
          f:app.openshift.io/vcs-uri: {}
          f:openshift.io/generated-by: {}
        f:labels:
          .: {}
          f:app: {}
          f:app.kubernetes.io/component: {}
          f:app.kubernetes.io/instance: {}
          f:app.kubernetes.io/name: {}
          f:app.kubernetes.io/part-of: {}
          f:app.openshift.io/runtime: {}
          f:app.openshift.io/runtime-version: {}
    manager: Mozilla
    operation: Update
    time: "2021-04-19T20:56:36Z"
  name: perl-sample
  namespace: default
  resourceVersion: "1321000"
  uid: d35574d8-1faa-4dbe-b21a-58a8b94cb347
spec:
  lookupPolicy:
    local: false
status:
  dockerImageRepository: ""
```
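Note that the Deployment relies on the `alpha.image.policy.openshift.io/resolve-names: '*'` annotation to resolve the bare `perl-sample:latest` reference against the local ImageStream, while the ImageStream itself has `lookupPolicy.local: false`. A possible workaround sketch (not the root-cause fix, and only useful once a successful build has actually populated the tag) would be to enable local image lookup so the bare name resolves to the internal registry instead of docker.io:

```
# Workaround sketch, not verified against this bug: resolve unqualified
# "perl-sample:latest" references via the local ImageStream, then restart
# the rollout so the pod picks up the resolved image.
oc set image-lookup perl-sample -n default
oc rollout restart deployment/perl-sample -n default
```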
This does not seem to be reproducible on 4.8.0-0.nightly-2021-04-26-151924. I was able to successfully create Perl sample deployments using both versions 5.30-el7 and 5.30-ubi8 with the kubeadmin user, and the created ImageStream uses the internal registry as expected.
Mohammed Saud, I've seen the opposite. On a fresh install of 4.8.0-0.nightly-s390x-2021-04-29-131457 on z/VM, it looks as if all sample images now fail. Each of the 9 sample projects fails with an ImagePullBackOff error.

cvogt, above you said that these sample applications should use the ImageStreams in the openshift namespace. When I create these applications, they create their own ImageStreams in the default namespace; is this expected behavior?
```
tdale@fedora -> % oc get is -n default
NAME IMAGE REPOSITORY TAGS UPDATED
golang-sample
httpd-sample
java-sample
nodejs-sample
perl-sample
perl-sample2
php-sample
python-sample
ruby-sample

tdale@fedora -> % oc get events | grep Failed
20m Warning Failed pod/golang-sample-5496977d8b-hb29c Failed to pull image "golang-sample:latest": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/golang-sample: errors:
20m Warning Failed pod/golang-sample-5496977d8b-hb29c Error: ErrImagePull
19m Warning Failed pod/golang-sample-5496977d8b-hb29c Error: ImagePullBackOff
18m Warning Failed pod/httpd-sample-69bd784cd6-6p4rg Failed to pull image "httpd-sample:latest": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/httpd-sample: errors:
18m Warning Failed pod/httpd-sample-69bd784cd6-6p4rg Error: ErrImagePull
18m Warning Failed pod/httpd-sample-69bd784cd6-6p4rg Error: ImagePullBackOff
17m Warning Failed pod/java-sample-67f7997bcd-jgq42 Failed to pull image "java-sample:latest": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/java-sample: errors:
17m Warning Failed pod/java-sample-67f7997bcd-jgq42 Error: ErrImagePull
17m Warning Failed pod/java-sample-67f7997bcd-jgq42 Error: ImagePullBackOff
19m Warning BuildConfigTriggerFailed buildconfig/java-sample error triggering Build for BuildConfig default/java-sample: Internal error occurred: build config default/java-sample has already instantiated a build for imageid registry.redhat.io/openjdk/openjdk-11-rhel7@sha256:2d1582d81b37253dc482ef995113799d6ff391e145f2b022d38cd5f7582f62e0
18m Warning Failed pod/nodejs-sample-78bd6ccbcc-t46qm Failed to pull image "nodejs-sample:latest": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/nodejs-sample: errors:
18m Warning Failed pod/nodejs-sample-78bd6ccbcc-t46qm Error: ErrImagePull
17m Warning Failed pod/nodejs-sample-78bd6ccbcc-t46qm Error: ImagePullBackOff
26m Warning Failed pod/perl-sample-7586d5857c-2sv4k Failed to pull image "perl-sample:latest": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/perl-sample: errors:
26m Warning Failed pod/perl-sample-7586d5857c-2sv4k Error: ErrImagePull
26m Warning Failed pod/perl-sample-7586d5857c-2sv4k Error: ImagePullBackOff
22m Warning Failed pod/perl-sample2-9f9f86b49-sm8lx Failed to pull image "perl-sample2:latest": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/perl-sample2: errors:
22m Warning Failed pod/perl-sample2-9f9f86b49-sm8lx Error: ErrImagePull
22m Warning Failed pod/perl-sample2-9f9f86b49-sm8lx Error: ImagePullBackOff
15m Warning Failed pod/php-sample-765c6b6658-glsx8 Failed to pull image "php-sample:latest": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/php-sample: errors:
15m Warning Failed pod/php-sample-765c6b6658-glsx8 Error: ErrImagePull
16m Warning Failed pod/php-sample-765c6b6658-glsx8 Error: ImagePullBackOff
14m Warning Failed pod/python-sample-5f98dbdf5f-h42qj Failed to pull image "python-sample:latest": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/python-sample: errors:
14m Warning Failed pod/python-sample-5f98dbdf5f-h42qj Error: ErrImagePull
14m Warning Failed pod/python-sample-5f98dbdf5f-h42qj Error: ImagePullBackOff
14m Warning Failed pod/ruby-sample-76689b6959-xhwf7 Failed to pull image "ruby-sample:latest": rpc error: code = Unknown desc = Error reading manifest latest in docker.io/library/ruby-sample: errors:
14m Warning Failed pod/ruby-sample-76689b6959-xhwf7 Error: ErrImagePull
14m Warning Failed pod/ruby-sample-76689b6959-xhwf7 Error: ImagePullBackOff
...
```
Hey all, I'm still seeing this issue on version 4.8.0-0.nightly-s390x-2021-05-27-181756; I can't successfully install any of the sample applications in the GUI. I did notice an error, "InvalidOutputReference", that I hadn't noticed earlier in the build (the python-sample application in this example, but I see the same for all of them):
```
oc get builds python-sample-1 -o yaml
apiVersion: build.openshift.io/v1
kind: Build
metadata:
  ...
spec:
  nodeSelector: null
  output:
    to:
      kind: ImageStreamTag
      name: python-sample:latest
  postCommit: {}
  resources: {}
  serviceAccount: builder
  source:
    git:
      uri: https://github.com/sclorg/django-ex.git
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: DockerImage
        name: registry.redhat.io/ubi7/python-38@sha256:04a619bea861d291d55a1b53eca56ea2228a88cd466ad08a1b38ecea698ffd80
    type: Source
  triggeredBy:
  - imageChangeBuild:
      fromRef:
        kind: ImageStreamTag
        name: python:3.8-ubi7
        namespace: openshift
      imageID: registry.redhat.io/ubi7/python-38@sha256:04a619bea861d291d55a1b53eca56ea2228a88cd466ad08a1b38ecea698ffd80
    message: Image change
status:
  conditions:
  - lastTransitionTime: "2021-06-02T20:28:01Z"
    lastUpdateTime: "2021-06-02T20:28:01Z"
    message: Output image could not be resolved.
    reason: InvalidOutputReference
    status: "True"
    type: New
  config:
    kind: BuildConfig
    name: python-sample
    namespace: default
  message: Output image could not be resolved.
  output: {}
  phase: New
  reason: InvalidOutputReference
```
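InvalidOutputReference here means the build's output ImageStreamTag (python-sample:latest) could not be resolved when the build was instantiated. If the output ImageStream exists by the time you look, one way to check whether only the initial, automatically triggered build hit this is to re-run the build manually; this is just a diagnostic sketch, not a fix:

```
# Confirm the output ImageStream is present, then retry the build.
oc get imagestream python-sample -n default
oc start-build python-sample -n default --follow
```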
Created attachment 1788761 [details] yaml output from python-sample deployment, imagestream, buildconfig, build, and python imagestream in openshift namespace
I've checked all sample applications on the latest two 4.8 builds for s390x and am no longer seeing this problem. Perl and the other samples are running fine.
Hey Tom, thanks. I'm currently checking this, and one idea is that it was fixed with https://github.com/openshift/console/pull/8222, which is part of 4.8. I will still check whether I can find another potential reason. Thanks for the update.
Created attachment 1792626 [details] Create samples steps.md
Hey Tom, the referenced pull request was merged at the end of February for 4.8; exactly:

* 4.8.0 (merged 2021-02-26, at least part of 4.8.0-0.nightly-2021-03-01-031258, ticket https://bugzilla.redhat.com/show_bug.cgi?id=1933664)
* 4.7.5 (issued 2021-04-05, https://access.redhat.com/errata/RHSA-2021:1005, ticket https://bugzilla.redhat.com/show_bug.cgi?id=1933665)
* 4.6.28 (issued 2021-05-12, https://access.redhat.com/errata/RHBA-2021:1487, ticket https://bugzilla.redhat.com/show_bug.cgi?id=1933666)

You said that the issue happened in 4.8.0-0.nightly-s390x-2021-04-13-111422 and still existed in 4.8.0-0.nightly-s390x-2021-05-27-181756, but no longer in the latest two builds before 2021-06-18. It is strange that the fix did not take effect in the earlier builds. The latest s390x build before 2021-06-18 that also includes a console update is "4.8.0-0.nightly-s390x-2021-06-16-012945" [1]; console git 4890229b contains the change mentioned in the comment above [2].

[1] https://s390x.ocp.releases.ci.openshift.org/releasestream/4.8.0-0.nightly-s390x/release/4.8.0-0.nightly-s390x-2021-06-16-012945
[2] https://github.com/openshift/console/blob/4890229b7a6ef80b99113eb16761b91a8da166a6/frontend/packages/dev-console/src/components/import/import-submit-utils.ts#L526

Unfortunately, the old nightly build logs are not available anymore, and I could not start such a build anymore [3].

[3] https://s390x.ocp.releases.ci.openshift.org/#4.8.0-0.nightly-s390x

Because you also cannot reproduce this anymore (i.e., you have verified that this now works as expected), I think we can close this ticket. I will close it now as a duplicate of 1933664. Feel free to reopen it if you find this issue again.

I'm sorry that we didn't look into this fast enough to reproduce it ourselves with the specific nightly builds, but thanks for your attached logs. I believe the message "Output image could not be resolved." is an indicator that the resources were created in an unexpected order and that the Build failed because the output ImageStream/Tag wasn't available at that moment.

*** This bug has been marked as a duplicate of bug 1933664 ***
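For anyone verifying a fixed build, a quick sanity check (illustrative only, using the Perl sample and the default project from the earlier comments) is that the output ImageStreamTag exists and the Deployment's container image points at the internal registry rather than docker.io:

```
# After a successful sample deployment, the tag should exist and the image
# reference should be an image-registry.openshift-image-registry.svc address.
oc get istag perl-sample:latest -n default
oc get deployment perl-sample -n default \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'
```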