Description of problem:

Running the command suggested after `oc new-project`:

  kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node

results in:

  NAME                          READY   STATUS             RESTARTS   AGE
  hello-node-855447ffcc-cjl8l   0/1     ImagePullBackOff   0          2m47s

  86s  Normal   Pulling  pod/hello-node-855447ffcc-cjl8l  Pulling image "gcr.io/hello-minikube-zero-install/hello-node"
  85s  Warning  Failed   pod/hello-node-855447ffcc-cjl8l  Failed to pull image "gcr.io/hello-minikube-zero-install/hello-node": rpc error: code = Unknown desc = Requesting bear token: invalid status code from registry 400 (Bad Request)
  85s  Warning  Failed   pod/hello-node-855447ffcc-cjl8l  Error: ErrImagePull
  56s  Normal   BackOff  pod/hello-node-855447ffcc-cjl8l  Back-off pulling image "gcr.io/hello-minikube-zero-install/hello-node"
  70s  Warning  Failed   pod/hello-node-855447ffcc-cjl8l  Error: ImagePullBackOff

Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2020-08-27-005538

How reproducible:
Always

Steps to Reproduce:
1. Create a project with `oc new-project`.
2. Run the sample command printed by `oc new-project`:
   kubectl create deployment hello-node --image=gcr.io/hello-minikube-zero-install/hello-node
3. Check the pod status with `kubectl get po`.

Expected results:
Sample CLI commands work.

Additional info:
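The failure is visible in the STATUS column of `kubectl get po`. As a small illustrative helper (not part of the reproducer), the stuck pods can be filtered out of the listing with awk; the sample line below is copied from the output above, and in practice you would pipe `kubectl get po --no-headers` into the function:

```shell
# Flag pods stuck in an image-pull or crash-loop state from
# `kubectl get po --no-headers` output (column 3 is STATUS).
flag_stuck_pods() {
  awk '$3 ~ /ImagePullBackOff|ErrImagePull|CrashLoopBackOff/ {print $1 ": " $3}'
}

# Sample line taken verbatim from the report above:
printf '%s\n' 'hello-node-855447ffcc-cjl8l 0/1 ImagePullBackOff 0 2m47s' \
  | flag_stuck_pods
# → hello-node-855447ffcc-cjl8l: ImagePullBackOff
```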
Confirmed with the latest version: the image pull issue is fixed, but the pod still fails to run:

  [root@dhcp-140-138 ~]# ./kubectl create deployment hello-node --image=k8s.gcr.io/echoserver:1.4
  deployment.apps/hello-node created
  [root@dhcp-140-138 ~]# ./kubectl get po
  NAME                          READY   STATUS             RESTARTS   AGE
  hello-node-7567d9fdc9-752dz   0/1     CrashLoopBackOff   3          86s
  [root@dhcp-140-138 ~]# ./kubectl describe po/hello-node-7567d9fdc9-752dz
  Name:         hello-node-7567d9fdc9-752dz
  Namespace:    zhouy
  Priority:     0
  Node:         ip-10-0-187-49.ap-northeast-2.compute.internal/10.0.187.49
  Start Time:   Tue, 01 Sep 2020 13:41:03 +0800
  Labels:       app=hello-node
                pod-template-hash=7567d9fdc9
  Annotations:  k8s.v1.cni.cncf.io/network-status:
                  [{ "name": "", "interface": "eth0", "ips": [ "10.129.2.54" ], "default": true, "dns": {} }]
                k8s.v1.cni.cncf.io/networks-status:
                  [{ "name": "", "interface": "eth0", "ips": [ "10.129.2.54" ], "default": true, "dns": {} }]
                openshift.io/scc: restricted
  Status:       Running
  IP:           10.129.2.54
  IPs:
    IP:  10.129.2.54
  Controlled By:  ReplicaSet/hello-node-7567d9fdc9
  Containers:
    echoserver:
      Container ID:   cri-o://71622c4af6bf6e47d566a69788a4621d424359077b9fe9f525e180ab80bb0c6a
      Image:          k8s.gcr.io/echoserver:1.4
      Image ID:       k8s.gcr.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb
      Port:           <none>
      Host Port:      <none>
      State:          Waiting
        Reason:       CrashLoopBackOff
      Last State:     Terminated
        Reason:       Error
        Exit Code:    1
        Started:      Tue, 01 Sep 2020 13:41:28 +0800
        Finished:     Tue, 01 Sep 2020 13:41:28 +0800
      Ready:          False
      Restart Count:  2
      Environment:    <none>
      Mounts:
        /var/run/secrets/kubernetes.io/serviceaccount from default-token-tgtfb (ro)
  Conditions:
    Type              Status
    Initialized       True
    Ready             False
    ContainersReady   False
    PodScheduled      True
  Volumes:
    default-token-tgtfb:
      Type:        Secret (a volume populated by a Secret)
      SecretName:  default-token-tgtfb
      Optional:    false
  QoS Class:       BestEffort
  Node-Selectors:  <none>
  Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                   node.kubernetes.io/unreachable:NoExecute for 300s
  Events:
    Type     Reason          Age                From                                                     Message
    ----     ------          ----               ----                                                     -------
    Normal   Scheduled       <unknown>                                                                   Successfully assigned zhouy/hello-node-7567d9fdc9-752dz to ip-10-0-187-49.ap-northeast-2.compute.internal
    Normal   AddedInterface  42s                multus                                                   Add eth0 [10.129.2.54/23]
    Normal   Pulling         41s                kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Pulling image "k8s.gcr.io/echoserver:1.4"
    Normal   Pulled          33s                kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Successfully pulled image "k8s.gcr.io/echoserver:1.4" in 8.041931132s
    Normal   Created         18s (x3 over 33s)  kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Created container echoserver
    Normal   Started         18s (x3 over 33s)  kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Started container echoserver
    Normal   Pulled          18s (x2 over 33s)  kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Container image "k8s.gcr.io/echoserver:1.4" already present on machine
    Warning  BackOff         4s (x4 over 32s)   kubelet, ip-10-0-187-49.ap-northeast-2.compute.internal  Back-off restarting failed container
  [root@dhcp-140-138 ~]# ./kubectl logs -f po/hello-node-7567d9fdc9-752dz
  2020/09/01 05:42:39 [emerg] 1#1: mkdir() "/var/lib/nginx/proxy" failed (13: Permission denied)
  nginx: [emerg] mkdir() "/var/lib/nginx/proxy" failed (13: Permission denied)
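The logs plus the `openshift.io/scc: restricted` annotation explain the crash: under the restricted SCC the container runs with an arbitrary non-root UID, so nginx inside k8s.gcr.io/echoserver:1.4 cannot create /var/lib/nginx/proxy. One possible workaround (a sketch, not the fix shipped for this bug, and assuming the generated pod spec has no pre-existing volumes or volumeMounts) is to mount a writable emptyDir over /var/lib/nginx via a JSON patch:

```shell
# Hypothetical workaround sketch: give nginx a writable /var/lib/nginx
# so it can create its temp directories under a random non-root UID.
PATCH='[
  {"op": "add", "path": "/spec/template/spec/volumes",
   "value": [{"name": "nginx-var", "emptyDir": {}}]},
  {"op": "add", "path": "/spec/template/spec/containers/0/volumeMounts",
   "value": [{"name": "nginx-var", "mountPath": "/var/lib/nginx"}]}
]'

# Validate the patch document locally before touching the cluster:
printf '%s' "$PATCH" | python3 -m json.tool > /dev/null && echo 'patch OK'

# Against a live cluster you would then apply it to the deployment:
# kubectl patch deployment hello-node --type=json -p "$PATCH"
```

The `kubectl patch` line is left commented out because it targets a live cluster; only the local JSON validation runs as written.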
The issue is fixed:

  [zhouying@dhcp-140-138 ~]$ oc new-project zhouyt1
  Now using project "zhouyt1" on server "https://api.zy09096.qe.devcluster.openshift.com:6443".

  You can add applications to this project with the 'new-app' command. For example, try:

      oc new-app rails-postgresql-example

  to build a new example application in Ruby. Or use kubectl to deploy a simple Kubernetes application:

      kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname

  [zhouying@dhcp-140-138 ~]$ kubectl create deployment hello-node --image=k8s.gcr.io/serve_hostname
  deployment.apps/hello-node created
  [zhouying@dhcp-140-138 ~]$ oc get po
  NAME                          READY   STATUS    RESTARTS   AGE
  hello-node-6d9d95bb88-4wmp5   1/1     Running   0          11s
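The verification above checks that the pod reaches 1/1 Running. As an illustrative helper (not part of the verification itself), the same check can be scripted over `kubectl get po --no-headers` output; the sample line below is the one from the run above:

```shell
# Succeed only if every listed pod is fully Ready (column 2, e.g. "1/1")
# and its STATUS (column 3) is "Running".
all_pods_ready() {
  awk '{ split($2, r, "/"); if (r[1] != r[2] || $3 != "Running") bad = 1 }
       END { exit bad }'
}

# Sample line taken verbatim from the verification above:
printf '%s\n' 'hello-node-6d9d95bb88-4wmp5 1/1 Running 0 11s' \
  | all_pods_ready && echo 'deployment is up'
```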
*** Bug 1879051 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196