Checked with v3.9.30, and the issue still can be reproduced.

# oc describe pods heapster-86dd5b8544-p84p7 -n 6z5mi
Name:           heapster-86dd5b8544-p84p7
Namespace:      6z5mi
Node:           qe-wjiang-gce-docker-nrr-1/10.240.0.73
Start Time:     Sun, 10 Jun 2018 23:13:22 -0400
Labels:         k8s-app=heapster
                pod-template-hash=4288164100
                task=monitoring
Annotations:    openshift.io/scc=restricted
Status:         Pending
IP:             10.129.0.48
Controlled By:  ReplicaSet/heapster-86dd5b8544
Containers:
  heapster:
    Container ID:
    Image:         openshift3/metrics-heapster:latest
    Image ID:
    Port:          <none>
    Command:
      heapster-wrapper.sh
      --api-server
      --bind-address=0.0.0.0
      --secure-port=8443
      --requestheader-client-ca-file=/var/run/kubernetes/request-header-ca.crt
      --tls-ca-file=/var/run/kubernetes/client-ca.crt
      --source=kubernetes:https://kubernetes.default.svc?kubeletPort=10250&kubeletHttps=true
      --sink=influxdb:http://monitoring-influxdb.6z5mi.svc:8086
      --requestheader-username-headers=X-Remote-User
      --requestheader-group-headers=X-Remote-Group
      --requestheader-extra-headers-prefix=X-Remote-Extra-
      --cert-dir=/tmp
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/kubernetes from ca (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from heapster-token-tlk46 (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  ca:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      cert-configmap
    Optional:  false
  heapster-token-tlk46:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  heapster-token-tlk46
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  node-role.kubernetes.io/compute=true
Tolerations:     <none>
Events:
  Type     Reason                 Age               From                                 Message
  ----     ------                 ----              ----                                 -------
  Normal   Scheduled              7m                default-scheduler                    Successfully assigned heapster-86dd5b8544-p84p7 to qe-wjiang-gce-docker-nrr-1
  Normal   SuccessfulMountVolume  7m                kubelet, qe-wjiang-gce-docker-nrr-1  MountVolume.SetUp succeeded for volume "ca"
  Normal   SuccessfulMountVolume  7m                kubelet, qe-wjiang-gce-docker-nrr-1  MountVolume.SetUp succeeded for volume "heapster-token-tlk46"
  Normal   Pulling                6m (x2 over 7m)   kubelet, qe-wjiang-gce-docker-nrr-1  pulling image "openshift3/metrics-heapster:latest"
  Warning  Failed                 6m (x2 over 6m)   kubelet, qe-wjiang-gce-docker-nrr-1  Failed to pull image "openshift3/metrics-heapster:latest": rpc error: code = Unknown desc = repository docker.io/openshift3/metrics-heapster not found: does not exist or no pull access
  Warning  Failed                 6m (x2 over 6m)   kubelet, qe-wjiang-gce-docker-nrr-1  Error: ErrImagePull
  Warning  Failed                 6m (x5 over 6m)   kubelet, qe-wjiang-gce-docker-nrr-1  Error: ImagePullBackOff
  Normal   SandboxChanged         6m (x7 over 6m)   kubelet, qe-wjiang-gce-docker-nrr-1  Pod sandbox changed, it will be killed and re-created.
  Normal   BackOff                1m (x42 over 6m)  kubelet, qe-wjiang-gce-docker-nrr-1  Back-off pulling image "openshift3/metrics-heapster:latest"
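As a quick way to triage output like the above without reading the whole event table, the distinct image-pull failure reasons can be extracted from a saved `oc describe pod` dump. This is a minimal sketch using a sample excerpt in place of the real log; the file path and excerpt are illustrative, not part of the original report.

```shell
# Stand-in for `oc describe pods ... > /tmp/describe.txt` (sample excerpt only)
cat > /tmp/describe.txt <<'EOF'
Warning  Failed   kubelet  Error: ErrImagePull
Warning  Failed   kubelet  Error: ImagePullBackOff
Normal   BackOff  kubelet  Back-off pulling image "openshift3/metrics-heapster:latest"
EOF

# Print each distinct pull-failure reason once
grep -Eo 'ErrImagePull|ImagePullBackOff' /tmp/describe.txt | sort -u
```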
Checked with v3.9.31, and this issue cannot be reproduced now.
Checked again with v3.9.31 and found that only the latest tag works as expected.

// ImagePullBackOff
oc run aaaaa --image=openshift3/metrics-heapster:v3.9.31 --command sleep 10d

// Running - and uses the registry.reg-aws.openshift.com:443 registry
oc run aaaa --image=openshift3/metrics-heapster:latest --command sleep 10d
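The failing event in the describe output shows the pull falling back to docker.io, where openshift3/metrics-heapster does not exist. A common way to sidestep that fallback while testing is to reference the image by its fully qualified registry host (registry.reg-aws.openshift.com:443, per the note above). The sketch below only constructs the reference string; whether that registry serves the v3.9.31 tag is an assumption here, not something confirmed in this bug.

```shell
# Build a fully qualified image reference to avoid the docker.io fallback.
# Registry host is taken from the comment above; tag choice is illustrative.
REGISTRY="registry.reg-aws.openshift.com:443"
IMAGE="openshift3/metrics-heapster"
TAG="v3.9.31"
echo "${REGISTRY}/${IMAGE}:${TAG}"
```

The resulting reference would then be passed to `oc run --image=...` in place of the short `openshift3/...` form.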
*** Bug 1591506 has been marked as a duplicate of this bug. ***
Please do not fail QA unless the failure occurs on 3.9.31 but not on 3.9.29 (the release before the PR that is reverted in this bug). Only fail QA if you have output showing that a test that worked on 3.9.29 no longer works on 3.9.31. If a test fails on both, that is a new bug, not a reason to fail QA on this bug.
Checked with v3.9.31 again, and the original issue cannot be reproduced, so moving to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2018:2013